Nov 24 17:49:19 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 17:49:19 crc restorecon[4762]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:19 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 17:49:20 crc restorecon[4762]: 
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 17:49:20 crc restorecon[4762]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc 
restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 17:49:20 crc restorecon[4762]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 17:49:21 crc kubenswrapper[4768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 17:49:21 crc kubenswrapper[4768]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 17:49:21 crc kubenswrapper[4768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 17:49:21 crc kubenswrapper[4768]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
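The long run of "not reset as customized by admin" messages above is restorecon declining to relabel files whose current type is an SELinux customizable type: container_file_t is listed in the policy's customizable_types file, so a default (non-forced) relabel leaves those labels in place, while files with other stale types (like the config.json and kubenswrapper entries) do get relabeled. A minimal way to check this on a targeted-policy host, as a sketch with assumed paths:

    # container_file_t appears in the customizable-types list shipped with the targeted policy
    cat /etc/selinux/targeted/contexts/customizable_types
    # a default recursive relabel skips customizable types, printing messages like those above
    restorecon -Rv /var/lib/kubelet
    # -F forces customizable types back to the file-contexts default
    restorecon -RFv /var/lib/kubelet
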
Nov 24 17:49:21 crc kubenswrapper[4768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 17:49:21 crc kubenswrapper[4768]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.650668 4768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.653968 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.653987 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.653992 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.653996 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654000 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654004 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654009 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654013 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654017 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654021 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654025 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654028 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654032 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654037 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654051 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654056 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654060 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654064 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654067 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654070 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: 
W1124 17:49:21.654074 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654077 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654081 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654084 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654088 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654091 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654094 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654098 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654101 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654105 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654108 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654112 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654116 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654119 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654123 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654126 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654130 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654135 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
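
The long run of "unrecognized feature gate" warnings (which continues below) comes from OpenShift passing its cluster-scoped gate names to a kubelet that only registers upstream Kubernetes gates: unknown names are warned about and skipped, while recognized deprecated or GA gates are set with the "Setting ... It will be removed in a future release" warnings. A behavioral sketch of that warn-and-skip logic, not the actual feature_gate.go code:

```go
// featuregates.go: sketch of the warn-and-skip behavior behind the
// "unrecognized feature gate" lines. The known-gate table here is a tiny
// illustrative subset, not the kubelet's real registry.
package main

import (
	"fmt"
	"strings"
)

type spec struct{ preRelease string }

var known = map[string]spec{
	"KMSv1":                                  {preRelease: "Deprecated"},
	"CloudDualStackNodeIPs":                  {preRelease: "GA"},
	"DisableKubeletCloudCredentialProviders": {preRelease: "GA"},
	"ValidatingAdmissionPolicy":              {preRelease: "GA"},
}

// set applies "Name=bool" pairs the way the log suggests: unknown names are
// warned about and ignored; deprecated/GA names are set with a warning.
func set(pairs string, enabled map[string]bool) {
	for _, kv := range strings.Split(pairs, ",") {
		name, val, _ := strings.Cut(kv, "=")
		s, ok := known[name]
		if !ok {
			fmt.Printf("W] unrecognized feature gate: %s\n", name)
			continue
		}
		switch s.preRelease {
		case "Deprecated":
			fmt.Printf("W] Setting deprecated feature gate %s. It will be removed in a future release.\n", kv)
		case "GA":
			fmt.Printf("W] Setting GA feature gate %s. It will be removed in a future release.\n", kv)
		}
		enabled[name] = val == "true"
	}
}

func main() {
	enabled := map[string]bool{}
	set("AdminNetworkPolicy=true,KMSv1=true,ValidatingAdmissionPolicy=true", enabled)
	fmt.Println("feature gates:", enabled)
}
```
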
Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654140 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654145 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654149 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654153 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654156 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654159 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654163 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654166 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654170 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654174 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654180 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654185 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654191 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654196 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654201 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654206 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654210 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654213 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654217 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654221 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654225 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654230 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654234 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654238 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654241 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654245 4768 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654249 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654252 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654256 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654260 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654264 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654270 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.654275 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657725 4768 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657749 4768 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657759 4768 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657766 4768 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657773 4768 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657778 4768 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657785 4768 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657790 4768 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657795 4768 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657800 4768 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657804 4768 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657809 4768 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657814 4768 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657818 4768 flags.go:64] FLAG: --cgroup-root="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657823 4768 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657827 4768 flags.go:64] FLAG: --client-ca-file="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657831 4768 flags.go:64] FLAG: --cloud-config="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657835 4768 flags.go:64] FLAG: --cloud-provider="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657839 4768 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657845 4768 flags.go:64] FLAG: --cluster-domain="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657849 4768 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657853 4768 flags.go:64] FLAG: --config-dir="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657858 4768 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657863 4768 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657868 4768 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657873 4768 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657877 4768 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657882 4768 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657887 4768 flags.go:64] FLAG: --contention-profiling="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657891 4768 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657896 4768 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657902 4768 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657907 4768 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657913 4768 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657918 4768 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657923 4768 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657928 4768 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657933 4768 flags.go:64] FLAG: --enable-server="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657937 4768 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657944 4768 flags.go:64] FLAG: --event-burst="100" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657948 4768 flags.go:64] FLAG: --event-qps="50" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657952 4768 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657957 4768 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657963 4768 flags.go:64] FLAG: --eviction-hard="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657969 4768 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657973 4768 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657978 4768 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657982 4768 flags.go:64] FLAG: --eviction-soft="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657986 4768 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.657991 4768 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 
17:49:21.657995 4768 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658000 4768 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658004 4768 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658008 4768 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658013 4768 flags.go:64] FLAG: --feature-gates="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658018 4768 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658022 4768 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658027 4768 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658032 4768 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658039 4768 flags.go:64] FLAG: --healthz-port="10248" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658044 4768 flags.go:64] FLAG: --help="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658048 4768 flags.go:64] FLAG: --hostname-override="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658054 4768 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658059 4768 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658064 4768 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658068 4768 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658072 4768 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658077 4768 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658081 4768 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658085 4768 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658090 4768 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658094 4768 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658099 4768 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658103 4768 flags.go:64] FLAG: --kube-reserved="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658107 4768 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658111 4768 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658115 4768 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658119 4768 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658123 4768 flags.go:64] FLAG: --lock-file="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658128 4768 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658132 4768 flags.go:64] 
FLAG: --log-flush-frequency="5s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658136 4768 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658143 4768 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658147 4768 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658151 4768 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658155 4768 flags.go:64] FLAG: --logging-format="text" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658160 4768 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658165 4768 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658169 4768 flags.go:64] FLAG: --manifest-url="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658173 4768 flags.go:64] FLAG: --manifest-url-header="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658180 4768 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658185 4768 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658192 4768 flags.go:64] FLAG: --max-pods="110" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658198 4768 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658203 4768 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658208 4768 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658214 4768 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658219 4768 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658223 4768 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658227 4768 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658238 4768 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658242 4768 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658247 4768 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658251 4768 flags.go:64] FLAG: --pod-cidr="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658255 4768 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658262 4768 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658266 4768 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658271 4768 flags.go:64] FLAG: --pods-per-core="0" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658275 4768 flags.go:64] FLAG: --port="10250" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 
17:49:21.658279 4768 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658283 4768 flags.go:64] FLAG: --provider-id="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658288 4768 flags.go:64] FLAG: --qos-reserved="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658292 4768 flags.go:64] FLAG: --read-only-port="10255" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658297 4768 flags.go:64] FLAG: --register-node="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658301 4768 flags.go:64] FLAG: --register-schedulable="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658305 4768 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658313 4768 flags.go:64] FLAG: --registry-burst="10" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658318 4768 flags.go:64] FLAG: --registry-qps="5" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658322 4768 flags.go:64] FLAG: --reserved-cpus="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658326 4768 flags.go:64] FLAG: --reserved-memory="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658332 4768 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658336 4768 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658341 4768 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658345 4768 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658351 4768 flags.go:64] FLAG: --runonce="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658356 4768 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658360 4768 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658364 4768 flags.go:64] FLAG: --seccomp-default="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658369 4768 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658373 4768 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658378 4768 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658382 4768 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658387 4768 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658392 4768 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658396 4768 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658400 4768 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658405 4768 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658409 4768 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658413 4768 flags.go:64] FLAG: --system-cgroups="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658417 4768 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658424 4768 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658428 4768 flags.go:64] FLAG: --tls-cert-file="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658432 4768 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658437 4768 flags.go:64] FLAG: --tls-min-version="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658442 4768 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658447 4768 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658451 4768 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658455 4768 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658460 4768 flags.go:64] FLAG: --v="2" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658466 4768 flags.go:64] FLAG: --version="false" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658471 4768 flags.go:64] FLAG: --vmodule="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658476 4768 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.658506 4768 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660370 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660379 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660383 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660387 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660391 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660396 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660400 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660403 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660407 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660410 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660414 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660417 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660421 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660425 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 17:49:21 crc 
kubenswrapper[4768]: W1124 17:49:21.660428 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660431 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660435 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660439 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660443 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660446 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660450 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660454 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660458 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660462 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660466 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660469 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660473 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660477 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660494 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660498 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660501 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660505 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660533 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660537 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660540 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660544 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660547 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660551 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660555 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660558 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 17:49:21 crc 
kubenswrapper[4768]: W1124 17:49:21.660563 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660568 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660572 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660575 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660579 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660582 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660586 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660589 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660593 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660596 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660600 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660604 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660608 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660611 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660615 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660619 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660622 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660625 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660629 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660632 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660636 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660639 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660643 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660648 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
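
The same block of gate warnings recurs through this section, apparently once per application of the gate set during startup. To see the unique names rather than the repetition, a quick filter over a captured journal is enough; a stdlib-only sketch (the journalctl pipeline shown in the comment is just one way to feed it):

```go
// gatecount.go: count unique "unrecognized feature gate" names in a kubelet
// journal capture, e.g.:
//   journalctl -u kubelet | go run gatecount.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++ // key by gate name, count repetitions
		}
	}
	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%4d %s\n", counts[n], n)
	}
}
```
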
Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660652 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660656 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660661 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660666 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660670 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660674 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.660677 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.662172 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.678711 4768 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.678771 4768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678910 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678923 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678933 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678942 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678951 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678960 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678968 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678977 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678984 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.678996 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
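
The effective configuration is the "feature gates: {map[...]}" summary just above: the fifteen explicitly set gates the kubelet recognized, with the three deprecated/GA overrides applied and every unrecognized name dropped. A sketch that pulls such a line apart into a Go map:

```go
// gatemap.go: parse the effective-gates line logged above
// ("feature gates: {map[CloudDualStackNodeIPs:true ...]}") into a map.
package main

import (
	"fmt"
	"strings"
)

func parseGates(line string) map[string]bool {
	gates := map[string]bool{}
	start := strings.Index(line, "map[")
	end := strings.LastIndex(line, "]")
	if start < 0 || end < start {
		return gates // not a gates line
	}
	// Entries are space-separated "Name:bool" pairs inside map[...].
	for _, kv := range strings.Fields(line[start+len("map["):end]) {
		if name, val, ok := strings.Cut(kv, ":"); ok {
			gates[name] = val == "true"
		}
	}
	return gates
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}`
	g := parseGates(line)
	fmt.Println(len(g), "gates;", "NodeSwap enabled:", g["NodeSwap"])
}
```
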
Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679009 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679020 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679029 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679038 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679047 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679055 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679063 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679070 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679078 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679085 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679094 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679102 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679109 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679119 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679126 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679134 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679142 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679149 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679157 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679165 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679173 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679181 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679192 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679200 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679207 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679215 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 
17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679223 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679230 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679238 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679245 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679253 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679262 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679269 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679278 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679289 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679298 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679308 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679317 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679326 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679337 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679348 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679357 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679365 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679373 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679380 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679388 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679397 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679405 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679412 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679421 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679432 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679442 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679451 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679459 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679468 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679476 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679526 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679538 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679547 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679555 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679562 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.679577 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 17:49:21 crc 
kubenswrapper[4768]: W1124 17:49:21.679825 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679839 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679849 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679859 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679871 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679879 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679886 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679894 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679902 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679911 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679920 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679928 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679935 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679943 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679951 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679960 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679968 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679975 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679983 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679991 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.679999 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680007 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680014 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680023 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680030 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680039 4768 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680049 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680058 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680067 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680076 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680084 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680092 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680102 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680111 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680120 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680128 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680139 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680147 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680155 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680163 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680171 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680179 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680187 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680195 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680203 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680210 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680218 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680226 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680234 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680241 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680249 4768 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiVCenters Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680256 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680264 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680272 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680281 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680291 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680301 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680311 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680323 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680332 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680341 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680350 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680360 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680369 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680378 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680389 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680398 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680408 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680417 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680427 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.680439 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
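
After this final pass over the gate set, startup proceeds below to client certificate rotation: the certificate manager logs the certificate's expiration, a rotation deadline chosen ahead of expiry (client-go jitters this choice; the exact rule is an implementation detail not computed here), and the wait until that deadline, which is simply deadline minus the current time. A sketch reproducing that arithmetic with the timestamps from this log:

```go
// rotationwait.go: reproduce the "Waiting 86h26m29...s for next certificate
// rotation" arithmetic from the certificate_manager entries below.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting, as seen in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	deadline, err := time.Parse(layout, "2025-11-28 08:15:51.027194578 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// The log entry was emitted at Nov 24 17:49:21 UTC.
	now := time.Date(2025, time.November, 24, 17, 49, 21, 690688000, time.UTC)
	// Matches the logged wait to within a fraction of a second.
	fmt.Println("waiting", deadline.Sub(now))
}
```
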
Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.680453 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.680889 4768 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.686238 4768 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.686350 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.688453 4768 server.go:997] "Starting client certificate rotation" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.688491 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.690525 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-28 08:15:51.027194578 +0000 UTC Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.690688 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 86h26m29.336510672s for next certificate rotation Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.718191 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.720494 4768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.742125 4768 log.go:25] "Validated CRI v1 runtime API" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.779044 4768 log.go:25] "Validated CRI v1 image API" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.782278 4768 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.795127 4768 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-17-44-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.795453 4768 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.822642 4768 manager.go:217] Machine: {Timestamp:2025-11-24 17:49:21.817739547 +0000 UTC m=+0.678321374 
CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f215b4ef-9be9-4deb-ac5d-b54dee019f27 BootID:40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:70:7e:eb Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:70:7e:eb Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7b:2f:bc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:dd:f3:df Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:20:e7:6a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c8:98:f4 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:d4:e1:f9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:5e:36:5d:8d:d5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:24:97:26:bf:d2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 
Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.823215 4768 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.823536 4768 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.826061 4768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.826426 4768 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.826603 4768 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.827054 4768 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.827113 4768 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.827820 4768 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.827901 4768 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.828152 4768 state_mem.go:36] "Initialized new in-memory state store" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.828272 4768 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.833371 4768 kubelet.go:418] "Attempting to sync node with API server" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.833460 4768 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.833570 4768 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.833644 4768 kubelet.go:324] "Adding apiserver pod source" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.833720 4768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.839785 4768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.841068 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
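[annotation] On the certificate_manager.go lines further up, the "Waiting 86h26m29s" figure is just the gap between the current time and the rotation deadline the manager picked inside the certificate's validity window (upstream client-go jitters that deadline to roughly 70-90% of the cert lifetime; treat the exact ratio as an approximation). Reproducing the arithmetic from the log's own timestamps, with sub-second fractions dropped:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet-client rotation entries above.
	expiry, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z")
	deadline, _ := time.Parse(time.RFC3339, "2025-11-28T08:15:51Z")
	now, _ := time.Parse(time.RFC3339, "2025-11-24T17:49:21Z") // kubelet start

	fmt.Printf("expiration %s, rotation deadline %s\n",
		expiry.Format(time.RFC3339), deadline.Format(time.RFC3339))
	// Prints "Waiting 86h26m30s ..."; the log shows 86h26m29.336s because
	// of the sub-second offsets dropped above.
	fmt.Printf("Waiting %s for next certificate rotation\n", deadline.Sub(now))
}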
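[annotation] And the HardEvictionThresholds array in the nodeConfig dump directly above pairs each signal with either an absolute quantity (memory.available < 100Mi) or a fraction of capacity (nodefs.available < 10%, imagefs.available < 15%). A sketch of the LessThan evaluation those entries describe; the types and field names are simplified stand-ins for the kubelet's eviction API:

package main

import "fmt"

// threshold mirrors the shape in the nodeConfig dump: either an absolute
// quantity (bytes) or a percentage of capacity is set, never both.
type threshold struct {
	signal   string
	quantity int64   // bytes; 0 means "use pct"
	pct      float64 // fraction of capacity, e.g. 0.10
}

// breached reports whether available has fallen under the threshold,
// i.e. the "Operator":"LessThan" semantics shown in the log.
func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.pct * float64(capacity))
	}
	return available < limit
}

func main() {
	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", pct: 0.10}        // 10%

	fmt.Println(breached(mem, 80<<20, 32<<30)) // true: under 100Mi left
	// Capacity taken from the /dev/vda4 filesystem line above; ~12% free
	// is still above the 10% floor, so no eviction.
	fmt.Println(breached(nodefs, 10<<30, 85_292_941_312)) // false
}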
Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.842192 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.842541 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.842283 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.842588 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.844975 4768 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846891 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846919 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846929 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846937 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846953 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846963 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846973 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846988 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.846998 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.847008 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.847021 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.847030 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.847049 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 
17:49:21.847530 4768 server.go:1280] "Started kubelet" Nov 24 17:49:21 crc systemd[1]: Started Kubernetes Kubelet. Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.849746 4768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.850374 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.850420 4768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.850753 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.850825 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:57:33.28829056 +0000 UTC Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.850891 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 410h8m11.43740309s for next certificate rotation Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.851252 4768 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.851180 4768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.851966 4768 factory.go:55] Registering systemd factory Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.851988 4768 factory.go:221] Registration of the systemd container factory successfully Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852276 4768 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.852285 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852306 4768 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852296 4768 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852410 4768 factory.go:153] Registering CRI-O factory Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852419 4768 factory.go:221] Registration of the crio container factory successfully Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852498 4768 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852517 4768 factory.go:103] Registering Raw factory Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.852532 4768 manager.go:1196] Started watching for new ooms in manager Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.852880 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection 
refused Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.852935 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.853154 4768 manager.go:319] Starting recovery of all containers Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.854863 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="200ms" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.862340 4768 server.go:460] "Adding debug handlers to kubelet server" Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.861250 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b029f34608681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 17:49:21.847477889 +0000 UTC m=+0.708059686,LastTimestamp:2025-11-24 17:49:21.847477889 +0000 UTC m=+0.708059686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867882 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867936 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867949 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867958 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867971 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867982 4768 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.867991 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868000 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868010 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868018 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868026 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868037 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868046 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868057 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868067 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868078 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868089 4768 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868100 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868110 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868119 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868128 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868136 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868147 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868161 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868173 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868185 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868227 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868239 4768 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868250 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868260 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868269 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868279 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868289 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868300 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868310 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868319 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868330 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868340 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868351 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868360 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868368 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868377 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868386 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868396 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868404 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868414 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868423 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868432 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868441 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868450 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868459 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868469 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868497 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868512 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868521 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868530 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868539 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868546 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868555 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868564 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868575 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868587 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868595 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868603 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.868611 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870392 4768 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870421 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870436 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870451 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870462 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870475 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870503 4768 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870541 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870554 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870569 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870582 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870594 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870607 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870619 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870632 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870643 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870655 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870667 4768 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870679 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870705 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870718 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870730 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870745 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870757 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870772 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870784 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870797 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870808 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870820 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870832 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870844 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870878 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870891 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870904 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870916 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870930 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870941 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870952 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870963 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870975 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.870992 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871006 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871019 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871032 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871046 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871059 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871074 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871086 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871099 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871112 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871124 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871138 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871151 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871163 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871177 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871189 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871201 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871215 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871229 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871244 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871256 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871274 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871288 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871300 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871313 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871326 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871341 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871355 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871368 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871381 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871393 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871406 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871418 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871431 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871444 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871457 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871469 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871587 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871604 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871619 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871631 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871645 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871657 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871672 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871686 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871697 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871710 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871723 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871735 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871758 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871771 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871782 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871795 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871811 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871825 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871841 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871856 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871870 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871883 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871898 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871912 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871925 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871939 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871953 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871968 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871981 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.871994 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872007 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872021 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872033 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872043 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872054 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872064 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872073 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872083 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872095 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872105 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872114 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872124 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872134 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872144 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872154 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872165 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872179 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872188 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872198 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872207 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872221 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872230 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872267 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872278 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872298 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872311 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872323 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872336 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872348 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872360 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872371 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872382 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872394 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872405 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872416 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872426 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872436 4768 reconstruct.go:97] "Volume reconstruction finished" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.872444 4768 reconciler.go:26] "Reconciler: start to sync state" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.873860 4768 manager.go:324] Recovery completed Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.884067 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.885936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.885971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.885982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.889762 4768 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.889781 4768 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.889858 4768 state_mem.go:36] "Initialized new in-memory state store" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.895216 4768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.896992 4768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.897032 4768 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.897063 4768 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.897116 4768 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 17:49:21 crc kubenswrapper[4768]: W1124 17:49:21.900218 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.900312 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.913588 4768 policy_none.go:49] "None policy: Start" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.914570 4768 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.914600 4768 state_mem.go:35] "Initializing new in-memory state store" Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.952640 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.972481 4768 manager.go:334] "Starting Device Plugin manager" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.972555 4768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.972569 4768 server.go:79] "Starting device plugin registration server" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.973021 4768 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.973039 4768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.973726 4768 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.973833 4768 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.973845 4768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 17:49:21 crc kubenswrapper[4768]: E1124 17:49:21.979095 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.997188 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 24 17:49:21 crc kubenswrapper[4768]: 
I1124 17:49:21.997268 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.998367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.998430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.998441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.998667 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.998812 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:21 crc kubenswrapper[4768]: I1124 17:49:21.998849 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001347 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001431 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001459 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.001438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.002189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.002214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.002225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.003806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.003833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.003844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.003948 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.004084 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.004116 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.004738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.004778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.004788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005160 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005394 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005480 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.005914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006193 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006243 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.006962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.055569 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="400ms" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.073356 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.073819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.073840 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.073859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.073961 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074033 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074133 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074197 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074262 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074328 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074393 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074359 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.074509 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.074936 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175725 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175858 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175904 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175884 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175936 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.175958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176040 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176042 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 
17:49:22.176119 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176123 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176203 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176255 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176349 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176357 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176383 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176423 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176478 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176667 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176689 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176732 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.176883 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.275296 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.276970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.277025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.277062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.277099 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.277667 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.332244 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.338338 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.352573 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.361760 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.365881 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.394699 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-052775cf9ba10d4eb57071787f91b6afb77ddcb525adb2db4f32257b6e76b8c6 WatchSource:0}: Error finding container 052775cf9ba10d4eb57071787f91b6afb77ddcb525adb2db4f32257b6e76b8c6: Status 404 returned error can't find the container with id 052775cf9ba10d4eb57071787f91b6afb77ddcb525adb2db4f32257b6e76b8c6 Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.395878 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a8cf965780fc10c45011bd9cbe4f7fd40cd9d8f87dde2d993caac8b55f004995 WatchSource:0}: Error finding container a8cf965780fc10c45011bd9cbe4f7fd40cd9d8f87dde2d993caac8b55f004995: Status 404 returned error can't find the container with id a8cf965780fc10c45011bd9cbe4f7fd40cd9d8f87dde2d993caac8b55f004995 Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.400385 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c416bfe2f95dbe991e2855e75184679e3be81e0cc843d604fffd2db373be9c71 WatchSource:0}: Error finding container c416bfe2f95dbe991e2855e75184679e3be81e0cc843d604fffd2db373be9c71: Status 404 returned error can't find the container with id c416bfe2f95dbe991e2855e75184679e3be81e0cc843d604fffd2db373be9c71 Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.401515 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d058d188c17a6bafa70e70ef46455a5f1b286f336016d657ec1e1c2977c1a779 WatchSource:0}: Error finding container d058d188c17a6bafa70e70ef46455a5f1b286f336016d657ec1e1c2977c1a779: Status 404 returned error can't find the container with id d058d188c17a6bafa70e70ef46455a5f1b286f336016d657ec1e1c2977c1a779 Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.405809 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e05c7dd981b5b2b45b13f553ffb88fe148981187daceddcc890b141ffab2f262 WatchSource:0}: Error finding container 
e05c7dd981b5b2b45b13f553ffb88fe148981187daceddcc890b141ffab2f262: Status 404 returned error can't find the container with id e05c7dd981b5b2b45b13f553ffb88fe148981187daceddcc890b141ffab2f262 Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.456697 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="800ms" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.678163 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.679956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.680008 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.680019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.680056 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.680540 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.686704 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.686794 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:22 crc kubenswrapper[4768]: W1124 17:49:22.712738 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:22 crc kubenswrapper[4768]: E1124 17:49:22.712799 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.852211 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.901354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d058d188c17a6bafa70e70ef46455a5f1b286f336016d657ec1e1c2977c1a779"} Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.902283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c416bfe2f95dbe991e2855e75184679e3be81e0cc843d604fffd2db373be9c71"} Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.903198 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8cf965780fc10c45011bd9cbe4f7fd40cd9d8f87dde2d993caac8b55f004995"} Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.904158 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"052775cf9ba10d4eb57071787f91b6afb77ddcb525adb2db4f32257b6e76b8c6"} Nov 24 17:49:22 crc kubenswrapper[4768]: I1124 17:49:22.906546 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e05c7dd981b5b2b45b13f553ffb88fe148981187daceddcc890b141ffab2f262"} Nov 24 17:49:23 crc kubenswrapper[4768]: W1124 17:49:23.184438 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:23 crc kubenswrapper[4768]: E1124 17:49:23.184558 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:23 crc kubenswrapper[4768]: E1124 17:49:23.258302 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="1.6s" Nov 24 17:49:23 crc kubenswrapper[4768]: W1124 17:49:23.313214 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:23 crc kubenswrapper[4768]: E1124 17:49:23.313324 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.481731 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.482911 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.482949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.482960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.482986 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:23 crc kubenswrapper[4768]: E1124 17:49:23.483585 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.851586 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.911505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.911561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.911573 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.911583 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.911629 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.915337 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802" exitCode=0 Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.915543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.915656 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.916867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.916937 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.916965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.917939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.917984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.917998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.919159 4768 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b" exitCode=0 Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.919229 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.919557 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.919762 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.920526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.920567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.920602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.921523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.921787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.921926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.922211 4768 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619" exitCode=0 Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.922240 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.922687 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.923990 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.924029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.924045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.924087 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="115aa8b11d06015e075ecd057cebfeb48e8b48dabf4dcde085db58e7c9bfef63" exitCode=0 Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.924120 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"115aa8b11d06015e075ecd057cebfeb48e8b48dabf4dcde085db58e7c9bfef63"} Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.924278 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.925449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.925479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:23 crc kubenswrapper[4768]: I1124 17:49:23.925506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.851770 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:24 crc kubenswrapper[4768]: E1124 17:49:24.859323 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="3.2s" Nov 24 17:49:24 crc kubenswrapper[4768]: W1124 17:49:24.864910 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:24 crc kubenswrapper[4768]: E1124 17:49:24.865003 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:24 crc kubenswrapper[4768]: W1124 17:49:24.918987 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:24 crc kubenswrapper[4768]: E1124 17:49:24.919088 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.940523 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.940597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.940609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.940607 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.944297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.944341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.944351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.945209 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"53fc36405a12358007f4b3e5aa6fd8cfa3d50864042eae28769c853b38e1a52e"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.945294 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.946817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.946914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.946978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.948117 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0aed504cdce697a67257909347234d1d268731cfd4788665702d9f1fefd81fc9" exitCode=0 Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.948200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0aed504cdce697a67257909347234d1d268731cfd4788665702d9f1fefd81fc9"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.948353 4768 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.950148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.950172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.950181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954522 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954590 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954598 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954612 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954624 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a"} Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.954612 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.955441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.955470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.955496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.955751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.955776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:24 crc kubenswrapper[4768]: I1124 17:49:24.955786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:25 crc 
kubenswrapper[4768]: I1124 17:49:25.084527 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.085600 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.085638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.085650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.085674 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:25 crc kubenswrapper[4768]: E1124 17:49:25.086259 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 17:49:25 crc kubenswrapper[4768]: W1124 17:49:25.334044 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 17:49:25 crc kubenswrapper[4768]: E1124 17:49:25.334173 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.959589 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="41d7190571d28ba8a919c55ab72367fc821c76af3a484f2d846faf223b91ba10" exitCode=0 Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.959719 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.959741 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.959771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"41d7190571d28ba8a919c55ab72367fc821c76af3a484f2d846faf223b91ba10"} Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.959818 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.959852 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.960620 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.960673 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:25 crc 
kubenswrapper[4768]: I1124 17:49:25.961097 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961113 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961716 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:25 crc kubenswrapper[4768]: I1124 17:49:25.961892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.967229 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9efe128c5c465a5e97ed3999c845aaf99f54ce8f8f284ef94e862849c4bd1440"} Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.967337 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.967363 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.967359 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8765297eaac3b23102363c5a20bb8ba2adfe61b234cd89efe9f4a990ca64f775"} Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.967524 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5594e7a35900cb3a27abf0b6b52c8c5eb5dc6073fde777591827aa0b263d1fb"} Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.967545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2638ec423b0ed84cb8f7fd9675411807c732a4bc0d6e7d225e7bc75d4eab440"} Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.968356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.968379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.968391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.968435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.968452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:26 crc kubenswrapper[4768]: I1124 17:49:26.968462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.139990 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.972792 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa3b965619b00a1c06e5bbba266233972deaebef7329c7df8f9e8b281c15dc7f"} Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.972864 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.972890 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.973816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.973853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.973862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.974777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.974802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:27 crc kubenswrapper[4768]: I1124 17:49:27.974810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.280049 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.280304 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.281988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.282054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.282067 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.286635 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.287920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.287953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.287965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.288014 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.975894 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.977053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.977106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:28 crc kubenswrapper[4768]: I1124 17:49:28.977129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.280163 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.280405 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.282041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.282105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.282132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.606756 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.607031 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.608742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.608803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:29 crc kubenswrapper[4768]: I1124 17:49:29.608832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:31 crc kubenswrapper[4768]: I1124 17:49:31.281018 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 17:49:31 crc kubenswrapper[4768]: I1124 17:49:31.281090 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 17:49:31 crc kubenswrapper[4768]: E1124 17:49:31.979212 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.177479 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.177693 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.178934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.178965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.178975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.184385 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.263852 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.264066 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.265391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.265439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.265455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.928805 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.933173 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.987900 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.989159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.989210 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:32 crc kubenswrapper[4768]: I1124 17:49:32.989228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:33 crc kubenswrapper[4768]: I1124 17:49:33.990272 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:33 crc kubenswrapper[4768]: I1124 17:49:33.991731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:33 crc kubenswrapper[4768]: I1124 17:49:33.991787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:33 crc kubenswrapper[4768]: I1124 17:49:33.991804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:35 crc kubenswrapper[4768]: W1124 17:49:35.521859 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 17:49:35 crc kubenswrapper[4768]: I1124 17:49:35.522565 4768 trace.go:236] Trace[132831057]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 17:49:25.520) (total time: 10001ms): Nov 24 17:49:35 crc kubenswrapper[4768]: Trace[132831057]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (17:49:35.521) Nov 24 17:49:35 crc kubenswrapper[4768]: Trace[132831057]: [10.001560903s] [10.001560903s] END Nov 24 17:49:35 crc kubenswrapper[4768]: E1124 17:49:35.522613 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 17:49:35 crc kubenswrapper[4768]: I1124 17:49:35.852458 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 17:49:35 crc kubenswrapper[4768]: I1124 17:49:35.921052 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 17:49:35 crc kubenswrapper[4768]: I1124 17:49:35.921129 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 17:49:35 crc kubenswrapper[4768]: I1124 17:49:35.927080 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 17:49:35 crc kubenswrapper[4768]: I1124 17:49:35.927368 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.586916 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.587190 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.588468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.588540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.588555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.648723 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.997684 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.999209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.999254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:36 crc kubenswrapper[4768]: I1124 17:49:36.999262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:37 crc kubenswrapper[4768]: I1124 17:49:37.015796 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 17:49:38 crc kubenswrapper[4768]: I1124 17:49:38.000802 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:38 crc kubenswrapper[4768]: I1124 17:49:38.001893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:38 crc kubenswrapper[4768]: I1124 17:49:38.001954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:38 crc kubenswrapper[4768]: I1124 17:49:38.001971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.612057 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.612237 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.613643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.613699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.613713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.616101 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.645105 4768 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.843562 4768 apiserver.go:52] "Watching apiserver" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.848776 4768 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849080 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849423 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849464 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:39 crc kubenswrapper[4768]: E1124 17:49:39.849549 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849568 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:39 crc kubenswrapper[4768]: E1124 17:49:39.849636 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849801 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849838 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.849858 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:39 crc kubenswrapper[4768]: E1124 17:49:39.849917 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.851265 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.851924 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.852087 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.852359 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.852507 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.852551 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.852806 4768 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.853474 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.854193 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.854578 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.904030 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.916957 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.929072 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.940170 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.950153 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.962962 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.971972 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.981705 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:39 crc kubenswrapper[4768]: I1124 17:49:39.995098 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.018769 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 17:49:40 crc kubenswrapper[4768]: E1124 17:49:40.915867 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.919502 4768 trace.go:236] Trace[244410754]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 17:49:29.191) (total time: 11727ms): Nov 24 17:49:40 crc kubenswrapper[4768]: Trace[244410754]: ---"Objects listed" error:<nil> 11727ms (17:49:40.919) Nov 24 17:49:40 crc kubenswrapper[4768]: Trace[244410754]: [11.727977131s] [11.727977131s] END Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.919530 4768 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.920226 4768 trace.go:236] Trace[2029706701]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 17:49:30.373) (total time: 10546ms): Nov 24 17:49:40 crc kubenswrapper[4768]: Trace[2029706701]: ---"Objects listed" error:<nil> 10546ms (17:49:40.920) Nov 24 17:49:40 crc kubenswrapper[4768]: Trace[2029706701]: [10.546617199s] [10.546617199s] END Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.920261 4768 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.920892 4768 trace.go:236] Trace[129832183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 17:49:29.802) (total time: 11118ms): Nov 24 17:49:40 crc kubenswrapper[4768]: Trace[129832183]: ---"Objects listed" error:<nil> 11118ms (17:49:40.920) Nov 24 17:49:40 crc kubenswrapper[4768]: Trace[129832183]: [11.118285022s] [11.118285022s] END Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.920914 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 17:49:40 crc kubenswrapper[4768]: E1124 17:49:40.921860 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 17:49:40 crc kubenswrapper[4768]:
I1124 17:49:40.921937 4768 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.956401 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.968443 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.984620 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:40 crc kubenswrapper[4768]: I1124 17:49:40.996279 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.012023 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.022862 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.022921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.022939 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.022956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.022970 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.022997 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023013 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 17:49:41 crc 
kubenswrapper[4768]: I1124 17:49:41.023029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023050 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023106 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023129 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023152 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023186 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023200 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " 
Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023235 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023250 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023244 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023283 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023298 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023313 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023354 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023370 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023385 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023402 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023416 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023433 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023437 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023448 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023466 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023534 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023513 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023553 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023648 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023694 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023719 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023743 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023764 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023805 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023824 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023846 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023865 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023864 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023882 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023906 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023925 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023960 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023934 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023980 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.023998 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024047 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024062 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024082 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024113 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024134 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024186 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024205 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024200 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024224 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024250 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024269 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024313 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024331 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024351 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024373 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024422 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024440 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024460 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024479 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024516 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024534 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024554 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024572 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 17:49:41 crc 
kubenswrapper[4768]: I1124 17:49:41.024588 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024605 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024623 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024642 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024702 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024724 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024741 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024775 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024794 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024837 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024880 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024902 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024962 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024984 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025004 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025022 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025039 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025059 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025076 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025094 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025111 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025127 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025145 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025162 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025178 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025195 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025215 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025235 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025255 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025273 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025290 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025312 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025389 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025426 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025467 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025518 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025556 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025598 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025615 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025632 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025649 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025666 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025708 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025729 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025751 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025770 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025809 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025825 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025842 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025860 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025877 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025901 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025922 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025952 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025972 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026048 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026086 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026103 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026124 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026142 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026162 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026191 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" 
(UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026238 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026255 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026270 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026286 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026324 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026362 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026381 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026409 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026427 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026447 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026467 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026631 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026655 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026801 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026827 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026846 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 
17:49:41.024208 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024413 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030144 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024593 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024614 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024733 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024773 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.024906 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025066 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025177 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025286 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025355 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025464 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025480 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025706 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025729 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025886 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.025913 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026068 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026090 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026111 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026380 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026541 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.026743 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027059 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027107 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027446 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027472 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027729 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027756 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.027795 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.028995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029087 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029110 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029127 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029212 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029656 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.029992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030106 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030345 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030173 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030549 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030900 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030934 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030959 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.030988 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031039 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031091 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031169 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031191 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031214 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031240 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031264 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031286 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031307 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031327 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031346 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031370 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031388 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031444 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031477 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031579 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031609 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031663 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:41 crc 
kubenswrapper[4768]: I1124 17:49:41.031684 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031888 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031941 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031954 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031965 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031978 4768 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.031993 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032003 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032013 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032023 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032033 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032043 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032052 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032062 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.032076 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.033643 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.033679 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.033700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.033834 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.038016 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.038219 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.038900 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.038927 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039165 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039225 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039402 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039533 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039798 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039870 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.039986 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.040006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.040178 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.040801 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.041073 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.041344 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.041517 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.041584 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.041949 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.042141 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.042278 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.042464 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.042615 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.042935 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.043234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.043405 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.043405 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.043838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.043841 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.043909 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044258 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044372 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044503 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044594 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044721 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044802 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.044995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.045197 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.045385 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.045402 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.046736 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.046972 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.047420 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.047477 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.049017 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.049376 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.049779 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.049808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.049844 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.050258 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.051266 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.051345 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.051430 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.051784 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.052041 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.052049 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.052245 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.052726 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.052854 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.053093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.053148 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.053250 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.053421 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.053530 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055775 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.054055 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.054270 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055893 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.054770 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.054877 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.054797 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055061 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055288 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056029 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055515 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055686 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.055710 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056191 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056289 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056374 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056587 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056955 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.056998 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.057078 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.057231 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:49:41.557197066 +0000 UTC m=+20.417779023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.057370 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.057658 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.057711 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.057950 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.058015 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.058043 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.058115 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:41.558092069 +0000 UTC m=+20.418674056 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.058923 4768 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.061788 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.062033 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.062620 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:41.562601255 +0000 UTC m=+20.423183022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.063395 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.065634 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.066498 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.067652 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.067785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.067704 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.068142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.068252 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.068693 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.068777 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:41.568754432 +0000 UTC m=+20.429336209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.069687 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.069977 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.070266 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.070318 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.070384 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.070592 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.071110 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.071765 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.073344 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.079004 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.085094 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.085150 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.085171 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.085252 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:41.585223175 +0000 UTC m=+20.445804952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.085665 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.088535 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.090173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.090343 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.094155 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.097793 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.102125 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.104367 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.106809 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.107900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.108907 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.117340 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.118629 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.119795 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.119963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.121767 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.129095 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.129967 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.131967 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135329 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135797 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135944 4768 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135965 4768 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135980 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.135991 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136003 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136017 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136029 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136040 4768 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136044 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136051 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136091 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136110 4768 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136128 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136144 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136156 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136169 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136183 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136196 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136209 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136221 4768 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136234 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136261 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136275 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136286 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136297 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136310 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136321 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136334 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136348 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136362 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136376 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136389 4768 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136402 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136424 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136437 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136448 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136460 4768 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136472 4768 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136500 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136514 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136526 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136539 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136551 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136562 4768 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136574 4768 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136586 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136602 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136614 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136626 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136649 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136670 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136682 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136697 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136710 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136722 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136734 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136746 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136759 4768 
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136770 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136781 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136793 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136802 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136812 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136821 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136831 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136840 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136848 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136857 4768 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136865 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136874 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 
17:49:41.136883 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136892 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136901 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136910 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136918 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136927 4768 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136935 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136944 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136953 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136962 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136971 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136980 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.136993 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137007 4768 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137019 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137030 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137041 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137051 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137061 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137071 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137080 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137089 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137099 4768 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137108 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137116 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137125 4768 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137134 4768 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137142 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137153 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137161 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137171 4768 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137180 4768 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137189 4768 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137198 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137207 4768 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137216 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137224 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137233 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137243 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137251 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137260 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137270 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137279 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137290 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137299 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137308 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137317 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137330 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137342 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137353 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137365 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137380 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137390 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137398 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137407 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137416 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137424 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137433 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137446 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137455 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137463 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137472 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137500 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137513 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137524 4768 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137533 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137542 4768 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137551 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137560 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137568 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137576 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137585 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137593 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137601 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137609 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137618 4768 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137626 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137638 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137646 4768 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137656 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137667 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137678 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137690 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137703 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137714 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137725 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137735 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137743 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137751 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137760 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137778 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137786 4768 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.137794 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.140915 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.143909 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.145477 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.145692 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.147936 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.148209 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.148655 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xdbcm"] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.148881 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.149234 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.149787 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.150604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.151100 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.151763 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.153400 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.153566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.153676 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.154080 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.154264 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.154321 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.154469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.179942 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.181600 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.189767 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.189920 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.233707 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239011 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hv9ll\" (UniqueName: \"kubernetes.io/projected/401a0505-4a0c-4407-a38d-fe41e14b4d2a-kube-api-access-hv9ll\") pod \"node-resolver-xdbcm\" (UID: \"401a0505-4a0c-4407-a38d-fe41e14b4d2a\") " pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/401a0505-4a0c-4407-a38d-fe41e14b4d2a-hosts-file\") pod \"node-resolver-xdbcm\" (UID: \"401a0505-4a0c-4407-a38d-fe41e14b4d2a\") " pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239162 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239196 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239207 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239237 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239250 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239263 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239276 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239288 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239319 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239330 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239341 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239353 4768 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239365 4768 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239395 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239407 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239420 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239431 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239442 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.239452 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.261878 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.281379 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.281447 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.283237 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.306058 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.325032 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.340014 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv9ll\" (UniqueName: \"kubernetes.io/projected/401a0505-4a0c-4407-a38d-fe41e14b4d2a-kube-api-access-hv9ll\") pod \"node-resolver-xdbcm\" (UID: \"401a0505-4a0c-4407-a38d-fe41e14b4d2a\") " pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.340115 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/401a0505-4a0c-4407-a38d-fe41e14b4d2a-hosts-file\") pod \"node-resolver-xdbcm\" (UID: \"401a0505-4a0c-4407-a38d-fe41e14b4d2a\") " pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.340346 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/401a0505-4a0c-4407-a38d-fe41e14b4d2a-hosts-file\") pod \"node-resolver-xdbcm\" (UID: \"401a0505-4a0c-4407-a38d-fe41e14b4d2a\") " pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.345877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.360533 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.365929 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv9ll\" (UniqueName: \"kubernetes.io/projected/401a0505-4a0c-4407-a38d-fe41e14b4d2a-kube-api-access-hv9ll\") pod \"node-resolver-xdbcm\" (UID: \"401a0505-4a0c-4407-a38d-fe41e14b4d2a\") " pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.366512 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.367253 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.376012 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.386421 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-70d7a4cfb3ebc5c471d19cd6be8293b9d47a9529bbd03696a777c706de2d1e56 WatchSource:0}: Error finding container 70d7a4cfb3ebc5c471d19cd6be8293b9d47a9529bbd03696a777c706de2d1e56: Status 404 returned error can't find the container with id 70d7a4cfb3ebc5c471d19cd6be8293b9d47a9529bbd03696a777c706de2d1e56 Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.387586 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-19ff5360d50c17bc520f0daddd6e9455d23bd768687e87035da5e93ddf57cfd2 WatchSource:0}: Error finding container 19ff5360d50c17bc520f0daddd6e9455d23bd768687e87035da5e93ddf57cfd2: Status 404 returned error can't find the container with id 19ff5360d50c17bc520f0daddd6e9455d23bd768687e87035da5e93ddf57cfd2 Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.387757 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.405724 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-57467e7b629bc55171f8d9a34fb2f9a4a2267c86cbf206dc62e32e6dcd630faf WatchSource:0}: Error finding container 57467e7b629bc55171f8d9a34fb2f9a4a2267c86cbf206dc62e32e6dcd630faf: Status 404 returned error can't find the container with id 57467e7b629bc55171f8d9a34fb2f9a4a2267c86cbf206dc62e32e6dcd630faf Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.464581 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xdbcm" Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.477661 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401a0505_4a0c_4407_a38d_fe41e14b4d2a.slice/crio-cfcc545b8c6c556b500b73f200668d43d147f0303362c98a03a94d666502aef1 WatchSource:0}: Error finding container cfcc545b8c6c556b500b73f200668d43d147f0303362c98a03a94d666502aef1: Status 404 returned error can't find the container with id cfcc545b8c6c556b500b73f200668d43d147f0303362c98a03a94d666502aef1 Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.543839 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vssnl"] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.544081 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ljwzj"] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.544293 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.544425 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.545570 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6x87x"] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.546106 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.549186 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.549205 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.549594 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.549801 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.549972 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.550242 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.550289 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.550301 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.550496 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.550478 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.550644 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.551160 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.561609 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.569167 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.577130 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.587387 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.596139 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.605292 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.619188 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.629857 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644244 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644321 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-kubelet\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644342 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sbg\" (UniqueName: \"kubernetes.io/projected/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-kube-api-access-z5sbg\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-cni-bin\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644414 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:49:42.64439094 +0000 UTC m=+21.504972717 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644499 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644527 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-cni-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-k8s-cni-cncf-io\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644561 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-cni-binary-copy\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-system-cni-dir\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644622 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-netns\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644648 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-multus-certs\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644662 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-cnibin\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-daemon-config\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644631 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644732 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:42.644713018 +0000 UTC m=+21.505294885 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/733afdb8-b6a5-40b5-8164-5885baf3eceb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.644764 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:42.644752749 +0000 UTC m=+21.505334616 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644781 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-proxy-tls\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644801 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hk7\" (UniqueName: \"kubernetes.io/projected/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-kube-api-access-54hk7\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644821 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-cnibin\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/733afdb8-b6a5-40b5-8164-5885baf3eceb-cni-binary-copy\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-system-cni-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " 
pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644889 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-mcd-auth-proxy-config\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-etc-kubernetes\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644939 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-rootfs\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.644983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-os-release\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645008 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkz2q\" (UniqueName: \"kubernetes.io/projected/733afdb8-b6a5-40b5-8164-5885baf3eceb-kube-api-access-lkz2q\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645037 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645059 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-cni-multus\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645090 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645112 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-os-release\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645131 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-socket-dir-parent\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645148 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-hostroot\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-conf-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.645192 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.645235 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.645255 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.645264 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.645275 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.645295 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:42.645285483 +0000 UTC m=+21.505867350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.645312 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:42.645305223 +0000 UTC m=+21.505887070 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.647119 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.661839 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.673038 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.680913 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc 
kubenswrapper[4768]: I1124 17:49:41.689673 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.699772 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/h
ost/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.712623 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.732257 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.742701 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-cni-binary-copy\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746094 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-system-cni-dir\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-netns\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746144 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-multus-certs\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-cnibin\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746193 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-daemon-config\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746201 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-system-cni-dir\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-cnibin\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746237 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/733afdb8-b6a5-40b5-8164-5885baf3eceb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-proxy-tls\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746273 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hk7\" (UniqueName: \"kubernetes.io/projected/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-kube-api-access-54hk7\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/733afdb8-b6a5-40b5-8164-5885baf3eceb-cni-binary-copy\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-system-cni-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746305 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-cnibin\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746340 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-mcd-auth-proxy-config\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-etc-kubernetes\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746367 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-netns\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-rootfs\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746404 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-os-release\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746423 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-multus-certs\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkz2q\" (UniqueName: \"kubernetes.io/projected/733afdb8-b6a5-40b5-8164-5885baf3eceb-kube-api-access-lkz2q\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-cni-multus\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746561 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-socket-dir-parent\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-hostroot\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-conf-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746694 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-os-release\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746729 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-kubelet\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746783 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sbg\" (UniqueName: \"kubernetes.io/projected/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-kube-api-access-z5sbg\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-cni-bin\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746851 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-cni-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-k8s-cni-cncf-io\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-run-k8s-cni-cncf-io\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.746959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-cni-binary-copy\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747160 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-cni-multus\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747224 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-socket-dir-parent\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") 
" pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747261 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-hostroot\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747294 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-conf-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-etc-kubernetes\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747384 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-daemon-config\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747395 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-os-release\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-kubelet\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-rootfs\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747459 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-system-cni-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-host-var-lib-cni-bin\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747624 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-os-release\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747667 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/733afdb8-b6a5-40b5-8164-5885baf3eceb-cnibin\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-multus-cni-dir\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.747773 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/733afdb8-b6a5-40b5-8164-5885baf3eceb-cni-binary-copy\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.748574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-mcd-auth-proxy-config\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.748702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/733afdb8-b6a5-40b5-8164-5885baf3eceb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.754005 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-proxy-tls\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.764129 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.767165 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sbg\" (UniqueName: \"kubernetes.io/projected/423ac327-22e2-4cc9-ba57-a1b2fc6f4bda-kube-api-access-z5sbg\") pod \"machine-config-daemon-ljwzj\" (UID: \"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\") " pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.767292 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkz2q\" (UniqueName: \"kubernetes.io/projected/733afdb8-b6a5-40b5-8164-5885baf3eceb-kube-api-access-lkz2q\") pod \"multus-additional-cni-plugins-6x87x\" (UID: \"733afdb8-b6a5-40b5-8164-5885baf3eceb\") " pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.767315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hk7\" (UniqueName: \"kubernetes.io/projected/895270a4-4f6a-4be4-9701-8a0f9cbf73d7-kube-api-access-54hk7\") pod \"multus-vssnl\" (UID: \"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\") " pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.778073 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.792436 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.805258 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.863009 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.871765 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vssnl" Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.877328 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod423ac327_22e2_4cc9_ba57_a1b2fc6f4bda.slice/crio-6c7e1d7db6da38f53b89170eb021cd74489d9ffa40c49d69152a74647cf653a7 WatchSource:0}: Error finding container 6c7e1d7db6da38f53b89170eb021cd74489d9ffa40c49d69152a74647cf653a7: Status 404 returned error can't find the container with id 6c7e1d7db6da38f53b89170eb021cd74489d9ffa40c49d69152a74647cf653a7 Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.882744 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod895270a4_4f6a_4be4_9701_8a0f9cbf73d7.slice/crio-db918dbd9f197f1e42a0c847ced37a02cb3062db8d6021ee025ee8f63d68abcc WatchSource:0}: Error finding container db918dbd9f197f1e42a0c847ced37a02cb3062db8d6021ee025ee8f63d68abcc: Status 404 returned error can't find the container with id db918dbd9f197f1e42a0c847ced37a02cb3062db8d6021ee025ee8f63d68abcc Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.893581 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2gjr"] Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.894547 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.896410 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.896631 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.896779 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.896911 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.896949 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.897355 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.897357 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.897462 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.897540 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.897583 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:41 crc kubenswrapper[4768]: E1124 17:49:41.897643 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.900293 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.900468 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6x87x" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.900972 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.904179 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.905064 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.906574 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.908280 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.909704 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.910200 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.910941 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.912552 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.913191 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.914353 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.915029 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.916253 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.917050 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.917941 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.918428 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.919338 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.919850 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.920376 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.920840 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.921125 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.921724 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.922253 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.923105 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.923767 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.924543 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.925138 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.925536 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: W1124 17:49:41.925961 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod733afdb8_b6a5_40b5_8164_5885baf3eceb.slice/crio-d3b4f06103d7dc143c483a6eed74a6267e04efd982c6c3d6ed32a9f1b9cb5d32 WatchSource:0}: Error finding container d3b4f06103d7dc143c483a6eed74a6267e04efd982c6c3d6ed32a9f1b9cb5d32: Status 404 returned error can't find the container with id d3b4f06103d7dc143c483a6eed74a6267e04efd982c6c3d6ed32a9f1b9cb5d32 Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.926527 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.927605 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.928093 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.928715 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.929577 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.930101 4768 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.930194 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.932308 4768 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.933592 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.934083 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.935004 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.935751 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.936625 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.937736 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.938647 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.940394 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.941125 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.942629 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.943933 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.944720 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.945910 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 17:49:41 crc 
kubenswrapper[4768]: I1124 17:49:41.946550 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.947129 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.947724 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.948633 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.949875 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.950594 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.951251 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.954562 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.955544 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.956181 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.957457 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.967206 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc 
kubenswrapper[4768]: I1124 17:49:41.977192 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:41 crc kubenswrapper[4768]: I1124 17:49:41.989198 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/h
ost/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.001381 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583f
c964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.011662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"57467e7b629bc55171f8d9a34fb2f9a4a2267c86cbf206dc62e32e6dcd630faf"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.013163 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.013192 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.013202 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"70d7a4cfb3ebc5c471d19cd6be8293b9d47a9529bbd03696a777c706de2d1e56"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.015460 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.024831 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdbcm" event={"ID":"401a0505-4a0c-4407-a38d-fe41e14b4d2a","Type":"ContainerStarted","Data":"1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.024887 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdbcm" event={"ID":"401a0505-4a0c-4407-a38d-fe41e14b4d2a","Type":"ContainerStarted","Data":"cfcc545b8c6c556b500b73f200668d43d147f0303362c98a03a94d666502aef1"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.026058 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.035168 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.035275 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"19ff5360d50c17bc520f0daddd6e9455d23bd768687e87035da5e93ddf57cfd2"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.037382 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerStarted","Data":"d3b4f06103d7dc143c483a6eed74a6267e04efd982c6c3d6ed32a9f1b9cb5d32"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.042115 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerStarted","Data":"db918dbd9f197f1e42a0c847ced37a02cb3062db8d6021ee025ee8f63d68abcc"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.046168 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.046438 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"6c7e1d7db6da38f53b89170eb021cd74489d9ffa40c49d69152a74647cf653a7"} Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-systemd-units\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048716 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-log-socket\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048741 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048764 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-etc-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048786 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048804 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048886 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovn-node-metrics-cert\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-script-lib\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.048975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-config\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049008 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-systemd\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dhc7\" (UniqueName: \"kubernetes.io/projected/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-kube-api-access-4dhc7\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049050 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-kubelet\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049077 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-slash\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049100 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-netns\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049132 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-ovn\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-env-overrides\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049243 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-var-lib-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-bin\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049310 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-node-log\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.049330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-netd\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.062103 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.076343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.096353 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.115336 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc 
kubenswrapper[4768]: I1124 17:49:42.133572 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-slash\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-netns\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150144 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-ovn\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150166 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-env-overrides\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-var-lib-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150231 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-bin\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150241 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-ovn\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150256 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-node-log\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150383 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-netd\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150289 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-var-lib-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: 
\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150432 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-netd\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-bin\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-netns\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-node-log\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150531 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-systemd-units\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150546 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-log-socket\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150578 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-systemd-units\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150620 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150637 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 
24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-etc-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151468 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovn-node-metrics-cert\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151513 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150700 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151352 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-etc-openvswitch\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-env-overrides\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150682 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150185 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-slash\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151539 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-script-lib\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-config\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151738 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-kubelet\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-systemd\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dhc7\" (UniqueName: \"kubernetes.io/projected/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-kube-api-access-4dhc7\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.150655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-log-socket\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151852 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-systemd\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.151952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-kubelet\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.152541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-config\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.159499 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-script-lib\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.160698 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovn-node-metrics-cert\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.169303 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.188253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dhc7\" (UniqueName: \"kubernetes.io/projected/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-kube-api-access-4dhc7\") pod \"ovnkube-node-w2gjr\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.190028 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.201809 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.206946 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.215661 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: W1124 17:49:42.218240 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod938bbdd8_09f5_44f8_a9a5_3b13c0f8a2cb.slice/crio-b9acfe70cba1fea9c53ebdb3678d91b368181acca63b937ef61622fe45e65ccb WatchSource:0}: Error finding container b9acfe70cba1fea9c53ebdb3678d91b368181acca63b937ef61622fe45e65ccb: Status 404 returned error can't find the container with id b9acfe70cba1fea9c53ebdb3678d91b368181acca63b937ef61622fe45e65ccb Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.238594 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.253459 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.266940 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.288167 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.300776 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.315603 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.329159 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.343224 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.353310 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.386349 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.422984 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.462506 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.501260 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc 
kubenswrapper[4768]: I1124 17:49:42.542552 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.591784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/h
ost/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.655199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.655479 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:49:44.65543634 +0000 UTC m=+23.516018117 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.655611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.655684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.655735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:42 crc kubenswrapper[4768]: I1124 17:49:42.655775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.655909 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.655970 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:44.655962273 +0000 UTC m=+23.516544050 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656040 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656071 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656244 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656058 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656354 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656373 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656166 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:44.656139468 +0000 UTC m=+23.516721245 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656451 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:44.656428815 +0000 UTC m=+23.517010782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656324 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:42 crc kubenswrapper[4768]: E1124 17:49:42.656520 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:44.656509787 +0000 UTC m=+23.517091764 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.051365 4768 generic.go:334] "Generic (PLEG): container finished" podID="733afdb8-b6a5-40b5-8164-5885baf3eceb" containerID="a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec" exitCode=0 Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.051464 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerDied","Data":"a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec"} Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.054639 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerStarted","Data":"e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462"} Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.057630 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c"} Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.057663 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50"} Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.058803 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574" exitCode=0 Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.058845 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"} Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.058872 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"b9acfe70cba1fea9c53ebdb3678d91b368181acca63b937ef61622fe45e65ccb"} Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.071569 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.085769 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.105839 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.121924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.136889 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.149155 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.160850 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.182812 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.199076 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.219870 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.235064 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.245867 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.258877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.269952 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.287948 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.303048 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc 
kubenswrapper[4768]: I1124 17:49:43.316410 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.335564 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.352327 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.392068 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.426921 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.476216 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.516601 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.560941 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:43Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.898193 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.898246 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:43 crc kubenswrapper[4768]: E1124 17:49:43.898962 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:43 crc kubenswrapper[4768]: E1124 17:49:43.899030 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:43 crc kubenswrapper[4768]: I1124 17:49:43.898281 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:43 crc kubenswrapper[4768]: E1124 17:49:43.899231 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.064737 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerStarted","Data":"0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44"} Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.068710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"} Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.068847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"} Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.068962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"} Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.069052 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"} Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.081590 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.092174 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.106230 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.120770 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.136904 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e
6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.153198 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.167513 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.189974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z 
is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.204638 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.219639 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.234730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.246758 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.678426 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.678655 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:49:48.678635348 +0000 UTC m=+27.539217125 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.678884 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.678972 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.679027 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.679101 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.679263 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.679288 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.679302 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.679359 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:48.679342116 +0000 UTC m=+27.539923893 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.679894 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.679946 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:48.679935111 +0000 UTC m=+27.540516888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.680022 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.680044 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.680054 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.680091 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:48.680080375 +0000 UTC m=+27.540662152 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.680144 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: E1124 17:49:44.680180 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 17:49:48.680171037 +0000 UTC m=+27.540752804 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.834294 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-m7zct"] Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.834976 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.837727 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.837960 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.838102 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.838829 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.849863 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.861269 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.873081 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.905612 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.940384 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.958072 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.971326 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.982760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb6z6\" (UniqueName: \"kubernetes.io/projected/c9ba241e-dd35-4128-a0e2-ee818cf1576f-kube-api-access-zb6z6\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.982842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c9ba241e-dd35-4128-a0e2-ee818cf1576f-serviceca\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.982871 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9ba241e-dd35-4128-a0e2-ee818cf1576f-host\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.984577 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:44 crc kubenswrapper[4768]: I1124 17:49:44.998371 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.011655 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.023126 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.038274 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.051406 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.074179 4768 generic.go:334] "Generic (PLEG): container finished" podID="733afdb8-b6a5-40b5-8164-5885baf3eceb" containerID="0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44" exitCode=0 Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.074257 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerDied","Data":"0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44"} Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.078849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"} Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.078888 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"} Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.080568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7"} Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.084247 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c9ba241e-dd35-4128-a0e2-ee818cf1576f-serviceca\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.084304 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9ba241e-dd35-4128-a0e2-ee818cf1576f-host\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.084360 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb6z6\" (UniqueName: 
\"kubernetes.io/projected/c9ba241e-dd35-4128-a0e2-ee818cf1576f-kube-api-access-zb6z6\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.084663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9ba241e-dd35-4128-a0e2-ee818cf1576f-host\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.085465 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c9ba241e-dd35-4128-a0e2-ee818cf1576f-serviceca\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.093962 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.103402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb6z6\" (UniqueName: \"kubernetes.io/projected/c9ba241e-dd35-4128-a0e2-ee818cf1576f-kube-api-access-zb6z6\") pod \"node-ca-m7zct\" (UID: \"c9ba241e-dd35-4128-a0e2-ee818cf1576f\") " pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.114938 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z 
is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.129151 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.143301 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.154765 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m7zct" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.157218 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.178362 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.190547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.205962 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.219647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.234004 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.246295 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.257037 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.269063 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.281774 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.295311 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.305753 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.326098 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.366191 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.405640 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.443702 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.486582 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.529104 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.570763 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.609882 4768 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.658217 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.690316 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.898378 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.898557 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:45 crc kubenswrapper[4768]: I1124 17:49:45.898620 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:45 crc kubenswrapper[4768]: E1124 17:49:45.898630 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:45 crc kubenswrapper[4768]: E1124 17:49:45.898805 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:45 crc kubenswrapper[4768]: E1124 17:49:45.898983 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.084279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m7zct" event={"ID":"c9ba241e-dd35-4128-a0e2-ee818cf1576f","Type":"ContainerStarted","Data":"0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b"} Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.084330 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m7zct" event={"ID":"c9ba241e-dd35-4128-a0e2-ee818cf1576f","Type":"ContainerStarted","Data":"9ce7b0acba47a637648c714cd83876f9ceb6c2fafc550942f2ad469b8c47124a"} Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.086400 4768 generic.go:334] "Generic (PLEG): container finished" podID="733afdb8-b6a5-40b5-8164-5885baf3eceb" containerID="6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57" exitCode=0 Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.086938 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerDied","Data":"6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57"} Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.106300 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.121804 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.146582 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z 
is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.159394 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.173125 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.183127 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.198608 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.212888 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.226108 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.237212 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.249158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.262190 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.274138 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.289900 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.304702 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.325155 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.364596 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.404828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.448960 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc
7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37c
ed9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.486652 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a0
0deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.526527 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.566031 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.607029 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.647232 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.690093 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:46 crc kubenswrapper[4768]: I1124 17:49:46.727333 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:46Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.091938 4768 generic.go:334] "Generic (PLEG): container finished" podID="733afdb8-b6a5-40b5-8164-5885baf3eceb" containerID="1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19" exitCode=0 Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.092026 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerDied","Data":"1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.096578 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.108836 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.121918 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.133317 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.147661 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.159524 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.175178 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/ho
st/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.190119 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.205159 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.222401 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.235478 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.250144 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.265029 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.278529 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.322029 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.324465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.324545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.324561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.324690 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.330997 4768 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.331414 4768 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.332783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.332809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.332820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.332833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.332844 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.344696 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.348736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.348785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.348802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.348844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.348869 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.361931 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.365381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.365416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.365425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.365439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.365451 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.382234 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.385799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.385839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.385852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.385872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.385885 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.398057 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.401926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.401964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.401976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.401994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.402006 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.416235 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.416392 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.418260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.418295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.418306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.418323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.418333 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.520718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.520758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.520770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.520790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.520802 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.623342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.623390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.623406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.623427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.623439 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.725458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.725522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.725533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.725551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.725563 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.827897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.827930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.827938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.827954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.827964 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.898085 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.898106 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.898227 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.898245 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.898439 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:47 crc kubenswrapper[4768]: E1124 17:49:47.898583 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.930754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.930813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.930829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.930852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:47 crc kubenswrapper[4768]: I1124 17:49:47.930869 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:47Z","lastTransitionTime":"2025-11-24T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.033892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.033946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.033957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.033973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.033985 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.103797 4768 generic.go:334] "Generic (PLEG): container finished" podID="733afdb8-b6a5-40b5-8164-5885baf3eceb" containerID="e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f" exitCode=0 Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.103859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerDied","Data":"e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.124110 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.136357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.136401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.136414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.136433 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.136444 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.140309 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.156529 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.172319 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6
c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.186200 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.197609 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.210518 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.222966 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.238766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.238797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.238806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.238823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.238834 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.240811 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d19
39c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.255306 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1b
fba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.266544 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.280537 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.283908 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.286734 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.292052 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.296085 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.307423 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 
2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.322179 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.335450 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c
9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.341300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.341331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.341340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.341354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.341364 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.346272 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.360895 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.374809 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.386998 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.396125 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.407146 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.416768 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.429029 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.441268 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.444047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.444084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.444097 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.444115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.444126 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.451126 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.462463 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:48Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.546798 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.546838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.546849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.546866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.546878 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.650651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.650762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.650787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.650821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.650847 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.720350 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.720513 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.720568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720668 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:49:56.720634042 +0000 UTC m=+35.581215859 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.720752 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720777 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720818 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720811 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.720814 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720897 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720926 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720844 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720969 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720939 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:56.720910989 +0000 UTC m=+35.581492796 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.720994 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.721023 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:56.720999201 +0000 UTC m=+35.581581018 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.721080 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:56.721046942 +0000 UTC m=+35.581628759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:48 crc kubenswrapper[4768]: E1124 17:49:48.721120 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:49:56.721101703 +0000 UTC m=+35.581683520 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.755046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.755099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.755116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.755140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.755159 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.858917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.859241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.859253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.859270 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.859281 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.961460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.961547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.961566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.961590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:48 crc kubenswrapper[4768]: I1124 17:49:48.961607 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:48Z","lastTransitionTime":"2025-11-24T17:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.063817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.063850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.063861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.063877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.063890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.114872 4768 generic.go:334] "Generic (PLEG): container finished" podID="733afdb8-b6a5-40b5-8164-5885baf3eceb" containerID="519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098" exitCode=0 Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.115732 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerDied","Data":"519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.140516 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.156430 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.165885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.165924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.165934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.165953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.165964 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.175516 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.201756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.218730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.232841 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.251680 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.262762 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.273877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.273913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.273923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.273937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.273948 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.276669 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.294157 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.313344 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.333662 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.351667 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.369625 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:49Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.376231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.376271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.376284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.376301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.376313 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.478708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.478757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.478767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.478790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.478803 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.580975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.581044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.581069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.581100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.581122 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.684761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.684842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.684866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.684896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.684916 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.787540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.787624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.787648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.787678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.787701 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.890154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.890218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.890236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.890261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.890278 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.897573 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.897616 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:49 crc kubenswrapper[4768]: E1124 17:49:49.897710 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:49 crc kubenswrapper[4768]: E1124 17:49:49.897812 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.897925 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:49 crc kubenswrapper[4768]: E1124 17:49:49.898097 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.993655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.993703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.993717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.993737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:49 crc kubenswrapper[4768]: I1124 17:49:49.993752 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:49Z","lastTransitionTime":"2025-11-24T17:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.111118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.111164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.111177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.111195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.111207 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.122256 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.122517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.134672 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" event={"ID":"733afdb8-b6a5-40b5-8164-5885baf3eceb","Type":"ContainerStarted","Data":"0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.146050 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.160423 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.161613 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.180336 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3
b6eac425f5ca8a5c8ff0f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.195851 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.209585 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.213391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.213437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.213449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.213466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.213479 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.223747 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.235403 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.250291 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.266084 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.278285 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.288618 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.301187 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.315963 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.316019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.316034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.316055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.316069 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.319741 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.336146 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.349719 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.363681 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.382439 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3
b6eac425f5ca8a5c8ff0f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.395426 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.407416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.417150 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.421402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.421443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.421460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.421507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.421526 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.429624 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.442989 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.455313 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.467082 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.481424 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.497061 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.509748 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.520614 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:50Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.524221 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.524263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.524277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.524293 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.524303 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.626261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.626288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.626296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.626313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.626321 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.729636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.729689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.729706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.729730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.729748 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.833921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.833997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.834011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.834028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.834039 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.936265 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.936328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.936345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.936369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:50 crc kubenswrapper[4768]: I1124 17:49:50.936386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:50Z","lastTransitionTime":"2025-11-24T17:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.039407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.039462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.039476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.039514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.039528 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
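The records above all report one condition: the kubelet holds the node's Ready condition at False with reason KubeletNotReady because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. As a rough illustration of what that gate amounts to — a minimal sketch, not kubelet source, and the *.conf/*.conflist/*.json extension set is an assumption — the readiness test is essentially:

// cnicheck.go: illustrative sketch of the network-readiness gate behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/". Not kubelet
// code; the extension list below is an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, m...)
	}
	if len(found) == 0 {
		// The state this node is stuck in: Ready stays False with reason
		// KubeletNotReady until a CNI config file appears in confDir.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true:", found)
}

The logging stops once the network plugin — here OVN-Kubernetes, whose ovnkube-node pod is still coming up in the records that follow — writes its configuration into that directory.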
Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.139148 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.140829 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.141698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.141762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.141780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.141803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.141821 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.171001 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.190112 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.203888 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.221883 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.237606 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.245314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.245379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.245395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.245419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.245432 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.255982 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.276350 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.291785 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a
16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.313207 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.334115 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.346788 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.348316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.348357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.348370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.348389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.348402 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
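Every "Failed to update status for pod" record in this stretch fails the same way: the status patch goes through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, whose serving certificate carries a NotAfter of 2025-08-24T17:21:41Z, roughly three months before the node's current clock of 2025-11-24. A minimal sketch for confirming that from the node — illustrative Go, not OpenShift tooling; only the address is taken from the log:

// certcheck.go: illustrative sketch for confirming the x509 failure seen in
// the "failed calling webhook" records. The address 127.0.0.1:9743 is from
// the log; everything else is an assumption for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification on purpose: we want to inspect the expired
		// certificate, and verification would fail exactly as it does for kubelet.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notAfter=%s\n", leaf.Subject, leaf.NotAfter.Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		// Mirrors the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired")
	}
}

An equivalent spot check is: openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -enddate. Until that serving certificate is rotated, each status patch from this kubelet keeps failing with the same x509 error, as the remaining records show.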
Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.371193 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.383207 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.398692 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.408776 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.451377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.451425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.451434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc 
kubenswrapper[4768]: I1124 17:49:51.451450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.451458 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.555055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.555100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.555110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.555129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.555141 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.658182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.658260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.658281 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.658311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.658335 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.761160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.761213 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.761230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.761282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.761301 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.863455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.863597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.863615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.863643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.863658 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.898397 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.898539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:51 crc kubenswrapper[4768]: E1124 17:49:51.898752 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.898806 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:51 crc kubenswrapper[4768]: E1124 17:49:51.898903 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:51 crc kubenswrapper[4768]: E1124 17:49:51.899025 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.915986 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.926691 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.940203 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.949226 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.965730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.965767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.965779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.965797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.965808 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:51Z","lastTransitionTime":"2025-11-24T17:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.972848 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:51 crc kubenswrapper[4768]: I1124 17:49:51.993847 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:51Z is after 
2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.009249 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.019829 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.031851 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.044690 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.065129 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3
b6eac425f5ca8a5c8ff0f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.067866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.067905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.067917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.067935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.067948 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.078896 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.093158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.110421 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:52Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.141675 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.170397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.170697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.170775 
4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.170851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.170918 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.273777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.273827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.273839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.273861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.273873 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.376414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.376693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.376763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.376830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.376890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.479234 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.479583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.479596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.479611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.479620 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.582545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.582587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.582595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.582611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.582620 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.684364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.684400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.684409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.684423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.684435 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.787557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.787625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.787634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.787651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.787661 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.890438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.890564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.890581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.890601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.890613 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.993036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.993080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.993091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.993122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:52 crc kubenswrapper[4768]: I1124 17:49:52.993141 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:52Z","lastTransitionTime":"2025-11-24T17:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.095854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.095918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.095934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.095957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.095972 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.100919 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w"] Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.101361 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.103030 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.104932 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.119478 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.138532 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.147037 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/0.log" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.149770 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1" exitCode=1 Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.149838 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.150481 4768 scope.go:117] "RemoveContainer" containerID="f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.157524 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.167111 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cf1a20e-72eb-4519-a3fd-2b973853a250-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.167172 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/6cf1a20e-72eb-4519-a3fd-2b973853a250-kube-api-access-vrxgd\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.167247 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cf1a20e-72eb-4519-a3fd-2b973853a250-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.167290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cf1a20e-72eb-4519-a3fd-2b973853a250-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.185622 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.199376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.199468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.199487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.199551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.199577 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.201010 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.212884 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.225412 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.236848 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.250156 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.267630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.267965 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cf1a20e-72eb-4519-a3fd-2b973853a250-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.268091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cf1a20e-72eb-4519-a3fd-2b973853a250-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.268140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/6cf1a20e-72eb-4519-a3fd-2b973853a250-kube-api-access-vrxgd\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.268200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6cf1a20e-72eb-4519-a3fd-2b973853a250-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.268724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cf1a20e-72eb-4519-a3fd-2b973853a250-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.268981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cf1a20e-72eb-4519-a3fd-2b973853a250-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.275660 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cf1a20e-72eb-4519-a3fd-2b973853a250-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.284647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.285058 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/6cf1a20e-72eb-4519-a3fd-2b973853a250-kube-api-access-vrxgd\") pod \"ovnkube-control-plane-749d76644c-9nm7w\" (UID: \"6cf1a20e-72eb-4519-a3fd-2b973853a250\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.301251 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.302340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.302367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.302375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.302392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.302401 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.313526 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.325964 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.340579 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.354741 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.371370 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.384192 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.396031 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.404892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.404924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.404932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.404948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.404958 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.410806 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.417001 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.428380 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: W1124 17:49:53.431118 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cf1a20e_72eb_4519_a3fd_2b973853a250.slice/crio-889c1ba613d9bb6c892177470f5175896fed6421d79af04f71282ec6872dc737 WatchSource:0}: Error finding container 
889c1ba613d9bb6c892177470f5175896fed6421d79af04f71282ec6872dc737: Status 404 returned error can't find the container with id 889c1ba613d9bb6c892177470f5175896fed6421d79af04f71282ec6872dc737 Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.440698 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.453203 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.471994 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.490047 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.507382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.507436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.507455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.507482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.507530 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.513430 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:52Z\\\",\\\"message\\\":\\\" 6043 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503410 6043 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503541 6043 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503575 6043 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 17:49:52.503948 6043 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 17:49:52.504031 6043 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 17:49:52.504056 6043 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 17:49:52.504064 6043 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 17:49:52.504098 6043 factory.go:656] Stopping watch factory\\\\nI1124 17:49:52.504120 6043 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 17:49:52.504144 6043 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.530050 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.543793 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.559356 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.571609 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:53Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.609630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.609658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.609666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.609682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.609690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.712372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.712413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.712422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.712438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.712447 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.815150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.815199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.815208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.815224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.815233 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.897829 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.897827 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.897964 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:53 crc kubenswrapper[4768]: E1124 17:49:53.898113 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:53 crc kubenswrapper[4768]: E1124 17:49:53.898407 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:53 crc kubenswrapper[4768]: E1124 17:49:53.898571 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.917528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.917568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.917581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.917596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:53 crc kubenswrapper[4768]: I1124 17:49:53.917605 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:53Z","lastTransitionTime":"2025-11-24T17:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.019984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.020259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.020274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.020291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.020302 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.122832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.122876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.122888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.122905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.122919 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.157653 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" event={"ID":"6cf1a20e-72eb-4519-a3fd-2b973853a250","Type":"ContainerStarted","Data":"eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.157723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" event={"ID":"6cf1a20e-72eb-4519-a3fd-2b973853a250","Type":"ContainerStarted","Data":"889c1ba613d9bb6c892177470f5175896fed6421d79af04f71282ec6872dc737"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.160086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/0.log" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.168764 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.168621 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.186814 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.200429 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mou
ntPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.213854 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.226383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.226431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.226445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.226466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.226484 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.228531 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.240849 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.256280 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.268609 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.280974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.298312 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d090
24de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.311943 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.327302 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.329241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.329274 4768 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.329283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.329295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.329305 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.343264 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.357051 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.368152 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.383754 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1e
a12fa41ce9bd87a95310b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:52Z\\\",\\\"message\\\":\\\" 6043 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503410 6043 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503541 6043 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503575 6043 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 17:49:52.503948 6043 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 17:49:52.504031 6043 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 17:49:52.504056 6043 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 17:49:52.504064 6043 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 17:49:52.504098 6043 factory.go:656] Stopping watch factory\\\\nI1124 17:49:52.504120 6043 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 17:49:52.504144 6043 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.431986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.432039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.432054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.432076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.432089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.534748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.534787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.534800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.534815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.534826 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.636950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.637016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.637026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.637042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.637053 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.738910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.738948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.738957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.738979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.738989 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.841146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.841194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.841206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.841225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.841241 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.943662 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.943697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.943709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.943723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:54 crc kubenswrapper[4768]: I1124 17:49:54.943734 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:54Z","lastTransitionTime":"2025-11-24T17:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.047388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.047439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.047448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.047467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.047479 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.150595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.150649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.150667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.150691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.150707 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.173784 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" event={"ID":"6cf1a20e-72eb-4519-a3fd-2b973853a250","Type":"ContainerStarted","Data":"4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.176750 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/1.log" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.177673 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/0.log" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.180648 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857" exitCode=1 Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.180708 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.180807 4768 scope.go:117] "RemoveContainer" containerID="f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.181560 4768 scope.go:117] "RemoveContainer" containerID="52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.181843 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.187799 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 
17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.205183 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.217942 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.230105 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.241574 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.258090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.258141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.258154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.258175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.258188 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.262170 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.278979 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.302300 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:52Z\\\",\\\"message\\\":\\\" 6043 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503410 6043 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503541 6043 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503575 6043 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 17:49:52.503948 6043 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 17:49:52.504031 6043 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 17:49:52.504056 6043 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 17:49:52.504064 6043 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 17:49:52.504098 6043 factory.go:656] Stopping watch factory\\\\nI1124 17:49:52.504120 6043 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 17:49:52.504144 6043 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.319865 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.327703 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hpd8h"] Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.328249 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.328327 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.339352 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.360400 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.361669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.361714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.361729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.361754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.361769 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.390824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.390979 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvrbf\" (UniqueName: \"kubernetes.io/projected/b50668f2-0a0b-40f4-9a38-3df082cf931e-kube-api-access-dvrbf\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.393788 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.413863 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.426491 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.437606 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.448557 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.458571 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.464149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.464188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.464204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.464223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.464237 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.470916 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.484660 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.492366 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvrbf\" (UniqueName: \"kubernetes.io/projected/b50668f2-0a0b-40f4-9a38-3df082cf931e-kube-api-access-dvrbf\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.492454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.492678 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.492782 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:49:55.992752135 +0000 UTC m=+34.853333952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs") pod "network-metrics-daemon-hpd8h" (UID: "b50668f2-0a0b-40f4-9a38-3df082cf931e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.496110 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.508367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.514189 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dvrbf\" (UniqueName: \"kubernetes.io/projected/b50668f2-0a0b-40f4-9a38-3df082cf931e-kube-api-access-dvrbf\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.524038 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8
678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.536956 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.550509 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.563609 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.566085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.566110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.566119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.566133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.566143 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.576601 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.597802 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f66823072de0ddda5348e3750b86a5f586f7b6c3b6eac425f5ca8a5c8ff0f0a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:52Z\\\",\\\"message\\\":\\\" 6043 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503410 6043 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503541 6043 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 17:49:52.503575 6043 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 17:49:52.503948 6043 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 17:49:52.504031 6043 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 17:49:52.504056 6043 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 17:49:52.504064 6043 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 17:49:52.504090 6043 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 17:49:52.504098 6043 factory.go:656] Stopping watch factory\\\\nI1124 17:49:52.504120 6043 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 17:49:52.504144 6043 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:54Z\\\",\\\"message\\\":\\\"network=default are: map[]\\\\nI1124 17:49:54.952643 6195 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 17:49:54.952648 6195 
services_controller.go:443] Built service openshift-controller-manager/controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.149\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1124 17:49:54.952649 6195 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI1124 17:49:54.952663 6195 services_controller.go:444] Built service openshift-controller-manager/controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 17:49:54.952672 6195 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.326013ms\\\\nF1124 17:49:54.952715 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.613358 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.627374 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.640539 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.653393 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.668436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.668473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.668481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc 
kubenswrapper[4768]: I1124 17:49:55.668519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.668532 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.770461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.770528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.770539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.770559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.770571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.872754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.872847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.872862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.872884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.872899 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.897713 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.897749 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.897792 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.897873 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.897962 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.898047 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.975442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.975527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.975549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.975571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.975584 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:55Z","lastTransitionTime":"2025-11-24T17:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:55 crc kubenswrapper[4768]: I1124 17:49:55.999288 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.999445 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:55 crc kubenswrapper[4768]: E1124 17:49:55.999559 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:49:56.999537427 +0000 UTC m=+35.860119274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs") pod "network-metrics-daemon-hpd8h" (UID: "b50668f2-0a0b-40f4-9a38-3df082cf931e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.077804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.077847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.077856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.077872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.077882 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.181566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.181617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.181634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.181654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.181668 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.185235 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/1.log" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.198926 4768 scope.go:117] "RemoveContainer" containerID="52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857" Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.199221 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.219897 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.233925 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.250467 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.271980 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.285117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.285173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.285185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.285204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.285216 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.287916 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.305319 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.325131 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.341174 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.358076 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.384859 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.388057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.388168 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.388185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.388217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.388230 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.402557 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.418130 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.433846 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operat
or@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.449282 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.463970 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.486341 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1e
a12fa41ce9bd87a95310b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:54Z\\\",\\\"message\\\":\\\"network=default are: map[]\\\\nI1124 17:49:54.952643 6195 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 17:49:54.952648 6195 services_controller.go:443] Built service openshift-controller-manager/controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.149\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1124 17:49:54.952649 6195 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI1124 17:49:54.952663 6195 services_controller.go:444] Built service openshift-controller-manager/controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 17:49:54.952672 6195 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.326013ms\\\\nF1124 17:49:54.952715 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:56Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.490981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.491204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.491325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.491424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.491534 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.594102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.594168 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.594187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.594214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.594233 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.697308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.697356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.697370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.697388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.697400 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.799693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.799747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.799766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.799792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.799812 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.808178 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.808297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.808325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.808349 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.808370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808455 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808528 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:12.808515747 +0000 UTC m=+51.669097524 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808583 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808691 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:12.808659471 +0000 UTC m=+51.669241298 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808725 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808759 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808776 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808822 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:12.808806825 +0000 UTC m=+51.669388692 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808833 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808857 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808876 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.808926 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:12.808909897 +0000 UTC m=+51.669491814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.809355 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:50:12.809340308 +0000 UTC m=+51.669922095 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.898216 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:49:56 crc kubenswrapper[4768]: E1124 17:49:56.898767 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
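[Editor's note] Note the retry scheduling in the mount failures: the volumes above have reached durationBeforeRetry 16s while the fresh metrics-certs failure just below starts at 2s, consistent with per-operation exponential backoff that doubles on each failure. A minimal illustrative sketch of that doubling scheme (the 2s initial delay, 2x factor, and cap here are assumptions for illustration, not values read from kubelet source):

// backoff.go - illustrative exponential backoff producing the
// durationBeforeRetry progression seen in the log (2s, ..., 16s, ...).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second      // assumed initial delay
	const factor = 2.0            // assumed doubling factor
	maxDelay := 2 * time.Minute   // assumed cap
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, delay)
		delay = time.Duration(float64(delay) * factor)
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}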
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.902914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.902974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.902996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.903020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:56 crc kubenswrapper[4768]: I1124 17:49:56.903040 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:56Z","lastTransitionTime":"2025-11-24T17:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.005594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.005678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.005704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.005739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.005764 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.010968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.011141 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.011382 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:49:59.011356817 +0000 UTC m=+37.871938634 (durationBeforeRetry 2s). 
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.108537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.108591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.108606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.108630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.108645 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.211236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.211289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.211300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.211321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.211336 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.313687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.313741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.313756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.313778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.313793 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.415965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.416011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.416023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.416041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.416053 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.518710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.518755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.518767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.518783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.518795 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.621045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.621119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.621129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.621145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.621155 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.723581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.723638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.723661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.723686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.723702 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.782807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.782850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.782862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.782879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.782892 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.798819 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:57Z is after 2025-08-24T17:21:41Z"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.804343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.804416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
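[Editor's note] Each "Error updating node status, will retry" entry is one attempt out of a short per-sync retry budget, which is why the same multi-kilobyte patch appears several times in a row before the kubelet gives up until the next sync period. A schematic Go sketch of that loop; the retry count of 5 matches the upstream kubelet's nodeStatusUpdateRetry constant as I understand it, and tryUpdateNodeStatus is a stand-in for the real PATCH call:

// statusretry.go - schematic of the node-status retry loop behind the
// repeated "will retry" entries. The stand-in always fails, mirroring
// the expired-webhook-cert situation in this log.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // believed to match the upstream kubelet constant

func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("unable to update node status after", nodeStatusUpdateRetry, "attempts")
}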
event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.804431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.804456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.804470 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.822762 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:57Z is after 2025-08-24T17:21:41Z"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.827355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.827388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.827397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.827414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.827423 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.844739 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.848429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.848514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.848527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.848549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.848562 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.861034 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.865063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.865118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.865130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.865152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.865165 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.878932 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.879122 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.880948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.881012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.881024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.881048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.881062 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.898437 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.898579 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.898637 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.898664 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.898769 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:57 crc kubenswrapper[4768]: E1124 17:49:57.898876 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.984444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.984548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.984566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.984591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:57 crc kubenswrapper[4768]: I1124 17:49:57.984608 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:57Z","lastTransitionTime":"2025-11-24T17:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.087469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.087568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.087592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.087622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.087641 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.190075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.190119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.190131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.190150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.190162 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.292876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.292909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.292918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.292932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.292941 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.395424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.395478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.395509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.395529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.395540 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.498016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.498100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.498117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.498139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.498156 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.601290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.601699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.601891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.602036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.602167 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.704953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.705009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.705027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.705049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.705065 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.808567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.808626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.808646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.808673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.808691 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.898308 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:58 crc kubenswrapper[4768]: E1124 17:49:58.898927 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.911609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.911684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.911700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.911725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:58 crc kubenswrapper[4768]: I1124 17:49:58.911744 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:58Z","lastTransitionTime":"2025-11-24T17:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.014303 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.014367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.014390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.014418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.014439 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.037389 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:49:59 crc kubenswrapper[4768]: E1124 17:49:59.037622 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:59 crc kubenswrapper[4768]: E1124 17:49:59.037722 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:50:03.037696286 +0000 UTC m=+41.898278093 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs") pod "network-metrics-daemon-hpd8h" (UID: "b50668f2-0a0b-40f4-9a38-3df082cf931e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.116881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.116950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.116968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.116993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.117008 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.219904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.220045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.220068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.220095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.220116 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.322855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.322899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.322907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.322921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.322930 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.425894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.425954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.425963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.425981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.425993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.528202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.528249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.528260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.528277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.528289 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.630688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.630728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.630739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.630754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.630764 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.734439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.734520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.734536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.734556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.734571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.837294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.837352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.837363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.837383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.837396 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.897940 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.898024 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.898063 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:49:59 crc kubenswrapper[4768]: E1124 17:49:59.898167 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:49:59 crc kubenswrapper[4768]: E1124 17:49:59.898270 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:49:59 crc kubenswrapper[4768]: E1124 17:49:59.898533 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.940290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.940354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.940363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.940382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:49:59 crc kubenswrapper[4768]: I1124 17:49:59.940393 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:49:59Z","lastTransitionTime":"2025-11-24T17:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.043657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.043796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.043822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.043855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.043880 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.146594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.146672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.146774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.146804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.146827 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.250743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.250796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.250810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.250874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.250892 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.354459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.354568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.354594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.354623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.354644 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.459684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.459737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.459751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.459771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.459786 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.562876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.562938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.562956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.562992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.563010 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.666420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.666564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.666593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.666623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.666642 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.769821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.769872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.769884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.769901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.769912 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.874078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.874143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.874157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.874180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.874199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.897317 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:00 crc kubenswrapper[4768]: E1124 17:50:00.897549 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.976825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.976877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.976891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.976909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:00 crc kubenswrapper[4768]: I1124 17:50:00.976923 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:00Z","lastTransitionTime":"2025-11-24T17:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.079962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.080045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.080065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.080095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.080111 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.182201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.182257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.182269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.182287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.182300 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.285057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.285378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.285390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.285406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.285415 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.387690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.387724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.387732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.387746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.387755 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.490561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.490631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.490648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.490676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.490694 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.593093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.593138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.593149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.593167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.593176 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.695518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.695556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.695582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.695598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.695611 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.797855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.797917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.797935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.797958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.797981 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.897571 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.897626 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:01 crc kubenswrapper[4768]: E1124 17:50:01.897714 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.897733 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:01 crc kubenswrapper[4768]: E1124 17:50:01.897837 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:01 crc kubenswrapper[4768]: E1124 17:50:01.897887 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.900865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.900960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.900971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.900986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.900998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:01Z","lastTransitionTime":"2025-11-24T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.909974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.922572 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.940352 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1e
a12fa41ce9bd87a95310b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:54Z\\\",\\\"message\\\":\\\"network=default are: map[]\\\\nI1124 17:49:54.952643 6195 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 17:49:54.952648 6195 services_controller.go:443] Built service openshift-controller-manager/controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.149\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1124 17:49:54.952649 6195 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI1124 17:49:54.952663 6195 services_controller.go:444] Built service openshift-controller-manager/controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 17:49:54.952672 6195 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.326013ms\\\\nF1124 17:49:54.952715 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.953961 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.964313 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.973540 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.986106 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:01 crc kubenswrapper[4768]: I1124 17:50:01.999693 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.002896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.002949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.002962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.002981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.002994 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.012585 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.023878 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.034109 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.044006 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.055796 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.066696 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 
17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.078128 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.088067 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:02Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.104798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.104829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.104839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.104853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.104863 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.207634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.207690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.207704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.207721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.207735 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.311169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.311216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.311226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.311242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.311252 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.414451 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.414534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.414545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.414565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.414576 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.517085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.517141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.517154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.517176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.517191 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.619673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.619744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.619756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.619799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.619812 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.722425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.722535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.722546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.722567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.722578 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.825197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.825257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.825267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.825283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.825292 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.897648 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:02 crc kubenswrapper[4768]: E1124 17:50:02.897796 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.928250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.928302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.928315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.928365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:02 crc kubenswrapper[4768]: I1124 17:50:02.928380 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:02Z","lastTransitionTime":"2025-11-24T17:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.031819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.031861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.031872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.031890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.031902 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.085528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:03 crc kubenswrapper[4768]: E1124 17:50:03.085676 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:50:03 crc kubenswrapper[4768]: E1124 17:50:03.085740 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:50:11.085725314 +0000 UTC m=+49.946307091 (durationBeforeRetry 8s). 
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.134961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.135011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.135027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.135050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.135067 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.237964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.238018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.238029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.238048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.238060 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
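The secret.go:188 / nestedpendingoperations.go:348 pair above shows the metrics-certs mount being deferred rather than failed outright: "not registered" typically means the kubelet has not yet registered metrics-daemon-secret for that pod, and the retry delay doubles on each failure (the logged durationBeforeRetry 8s fits a 0.5s -> 1s -> 2s -> 4s -> 8s progression). A sketch of that doubling delay; the 500ms starting value and ~2m cap are assumptions inferred from the log, not values read from kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed initial delay and cap; only the doubling behavior and the
	// 8s value are actually visible in the log above.
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}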
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.341406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.341478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.341518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.341546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.341562 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.443904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.443940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.443948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.443963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.443974 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.546148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.546187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.546197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.546236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.546249 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.648507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.648557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.648565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.648584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.648595 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.751989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.752044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.752062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.752086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.752104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.854788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.854822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.854835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.854851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.854863 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.897821 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:03 crc kubenswrapper[4768]: E1124 17:50:03.897958 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.898006 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.898130 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:03 crc kubenswrapper[4768]: E1124 17:50:03.898147 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:03 crc kubenswrapper[4768]: E1124 17:50:03.898305 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.957358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.957415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.957426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.957449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:03 crc kubenswrapper[4768]: I1124 17:50:03.957463 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:03Z","lastTransitionTime":"2025-11-24T17:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
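Every NotReady heartbeat and every "Error syncing pod" entry in this window traces back to the same root cause string: no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime keeps reporting NetworkReady=false until a network config appears in that directory. A standalone sketch of the same readiness test; the .conf/.conflist/.json extension set is an assumption (the common CNI conventions), not taken from this log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	var confs []string
	// Look for the usual CNI config file extensions.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pattern))
		confs = append(confs, matches...)
	}
	if len(confs) == 0 {
		fmt.Fprintf(os.Stderr, "no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", confs)
}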
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.059932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.059973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.059984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.060002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.060012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.163170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.163225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.163241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.163263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.163280 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.266293 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.266344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.266355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.266371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.266382 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.369166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.369227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.369241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.369261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.369275 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.472647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.472716 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.472726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.472743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.472753 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.575220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.575306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.575332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.575365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.575395 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.678251 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.678307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.678316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.678332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.678343 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.780374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.780419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.780427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.780446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.780459 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.883102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.883180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.883193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.883211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.883223 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.897745 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:04 crc kubenswrapper[4768]: E1124 17:50:04.897953 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.986312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.986357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.986367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.986385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:04 crc kubenswrapper[4768]: I1124 17:50:04.986394 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:04Z","lastTransitionTime":"2025-11-24T17:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.090098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.090152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.090165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.090184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.090200 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.193039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.193102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.193113 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.193129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.193143 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.297073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.297142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.297157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.297178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.297192 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.400708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.400764 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.400782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.400804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.400818 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.504249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.504290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.504299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.504316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.504326 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.607647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.607708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.607717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.607746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.607758 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.709903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.709941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.709949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.709967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.709977 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.811988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.812027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.812037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.812052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.812064 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.897968 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.898036 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:05 crc kubenswrapper[4768]: E1124 17:50:05.898201 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:05 crc kubenswrapper[4768]: E1124 17:50:05.898395 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.898089 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:05 crc kubenswrapper[4768]: E1124 17:50:05.898692 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.915379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.915443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.915455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.915484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:05 crc kubenswrapper[4768]: I1124 17:50:05.915532 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:05Z","lastTransitionTime":"2025-11-24T17:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.018866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.018912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.018924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.018944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.018965 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.121767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.121851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.121867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.121885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.121896 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.224995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.225068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.225087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.225114 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.225132 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.328078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.328148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.328169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.328195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.328212 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.430788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.430835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.430844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.430865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.430889 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.534073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.534139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.534162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.534192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.534211 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.638094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.638151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.638165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.638190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.638207 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.740422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.740466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.740476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.740523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.740537 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.844052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.844115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.844153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.844186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.844209 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.897971 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:06 crc kubenswrapper[4768]: E1124 17:50:06.898137 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
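The same diagnostics pods cycle through "No sandbox for pod can be found" and "Error syncing pod" every second or two; no sandbox can be started for them until the node's Ready condition flips. A client-go sketch that polls that condition from outside the node; the kubeconfig path is an assumption for this CRC environment, and error handling is trimmed for brevity:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Mirrors the Ready/KubeletNotReady pairs in the log above.
					fmt.Printf("Ready=%s reason=%s since %s\n", c.Status, c.Reason, c.LastTransitionTime)
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}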
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.946836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.946868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.946879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.946895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:06 crc kubenswrapper[4768]: I1124 17:50:06.946907 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:06Z","lastTransitionTime":"2025-11-24T17:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.050088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.050152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.050177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.050206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.050226 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.152623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.152674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.152686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.152705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.152720 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.255816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.255869 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.255878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.255898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.255913 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.358889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.358947 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.358956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.358981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.358993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.461400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.461469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.461517 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.461543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.461560 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.564604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.564659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.564669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.564692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.564706 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.668465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.668548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.668558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.668581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.668597 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.771047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.771111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.771124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.771144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.771155 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.875044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.875100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.875109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.875135 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.875145 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.889787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.889833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.889843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.889862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.889873 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.897546 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.897612 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.897581 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.897780 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.897837 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.897929 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.904113 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:07Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.908766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.908831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.908845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.908866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.908878 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.923193 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:07Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.928706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.928772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.928787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.928809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.928821 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.941627 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:07Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.945314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.945354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.945365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.945387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.945400 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.958168 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:07Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.962106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.962159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.962175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.962197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.962211 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.979050 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:07Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:07 crc kubenswrapper[4768]: E1124 17:50:07.979183 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.981458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.981526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.981542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.981561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:07 crc kubenswrapper[4768]: I1124 17:50:07.981709 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:07Z","lastTransitionTime":"2025-11-24T17:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.084344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.084385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.084395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.084411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.084422 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.187375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.187428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.187438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.187457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.187470 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.290793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.290848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.290859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.290880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.290894 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.394700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.394757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.394772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.394795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.394809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.498166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.498223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.498242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.498263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.498278 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.601545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.601597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.601609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.601631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.601643 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.704420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.704474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.704506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.704534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.704547 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.807654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.807742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.807756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.807781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.807808 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.898273 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:08 crc kubenswrapper[4768]: E1124 17:50:08.898428 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.911067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.911154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.911168 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.911193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:08 crc kubenswrapper[4768]: I1124 17:50:08.911208 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:08Z","lastTransitionTime":"2025-11-24T17:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.014438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.014503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.014514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.014530 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.014539 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.117332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.117407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.117422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.117445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.117462 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.220788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.220862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.220884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.220912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.220932 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.324102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.324157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.324170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.324187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.324199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.428597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.428685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.428704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.428736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.428756 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.531701 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.531828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.531847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.531880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.531899 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.634154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.634255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.634296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.634335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.634359 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.737619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.737672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.737683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.737709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.737721 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.841589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.841634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.841651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.841673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.841690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.898254 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.898372 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:09 crc kubenswrapper[4768]: E1124 17:50:09.898559 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.898620 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:09 crc kubenswrapper[4768]: E1124 17:50:09.898768 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:09 crc kubenswrapper[4768]: E1124 17:50:09.899207 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.899591 4768 scope.go:117] "RemoveContainer" containerID="52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.944022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.944079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.944096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.944119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:09 crc kubenswrapper[4768]: I1124 17:50:09.944278 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:09Z","lastTransitionTime":"2025-11-24T17:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.048150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.048324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.048408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.048547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.048641 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.150927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.150969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.150980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.150997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.151012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.256403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.256449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.256467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.256503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.256516 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.259671 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/1.log" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.262653 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.262856 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.281807 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.297143 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.319287 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f3373
2f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:54Z\\\",\\\"message\\\":\\\"network=default are: map[]\\\\nI1124 17:49:54.952643 6195 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 17:49:54.952648 6195 services_controller.go:443] Built service openshift-controller-manager/controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.149\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1124 17:49:54.952649 6195 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI1124 17:49:54.952663 6195 services_controller.go:444] Built service openshift-controller-manager/controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 17:49:54.952672 6195 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.326013ms\\\\nF1124 17:49:54.952715 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.337396 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.355692 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.359410 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.359475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.359525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.359551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.359567 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.368810 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.383924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.412833 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.431723 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.447551 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.459959 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.461156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.461183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.461192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.461214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.461224 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.470286 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.482098 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 
2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.493157 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.503576 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.514218 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:10Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.563861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.563901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.563910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.563926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.563935 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.666473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.666543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.666554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.666573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.666584 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.769445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.769527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.769544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.769568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.769584 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
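
[Annotation] The "Node became not ready" blocks repeating above (roughly every 100ms) all carry the same condition: the kubelet holds the node's Ready status at False because the container runtime reports NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/. A minimal sketch of that presence check follows; this is not the kubelet's actual code, the directory path is taken from the log message itself, and the suffixes are the standard CNI config extensions:

```python
# Sketch: does any CNI network config exist in the directory the kubelet
# complains about? Mirrors the condition, not the kubelet's implementation.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path from the log message above

def has_cni_config(conf_dir: str = CNI_CONF_DIR) -> bool:
    # CNI config loaders accept files ending in .conf, .conflist, or .json
    try:
        return any(name.endswith((".conf", ".conflist", ".json"))
                   for name in os.listdir(conf_dir))
    except FileNotFoundError:
        return False

print("CNI config present:", has_cni_config())
```

In this capture the check stays False because the ovnkube-controller container, which would normally install that config once OVN-Kubernetes is up, is itself crash-looping (see the CrashLoopBackOff entry further down).
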
Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.872247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.872305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.872317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.872336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.872352 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.898139 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:10 crc kubenswrapper[4768]: E1124 17:50:10.898365 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.974813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.974864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.974876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.974896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:10 crc kubenswrapper[4768]: I1124 17:50:10.974910 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:10Z","lastTransitionTime":"2025-11-24T17:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.078096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.078178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.078196 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.078222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.078240 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.086634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:11 crc kubenswrapper[4768]: E1124 17:50:11.086862 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:50:11 crc kubenswrapper[4768]: E1124 17:50:11.086992 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:50:27.086963918 +0000 UTC m=+65.947545735 (durationBeforeRetry 16s). 
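
[Annotation] The nestedpendingoperations entry above parks the failed MountVolume with "No retries permitted until ... (durationBeforeRetry 16s)": kubelet volume operations back off exponentially after repeated failures. An illustrative doubling sequence is sketched below; the constants are assumptions for illustration, not kubelet's exact parameters, though the observed 16s fits such a sequence:

```python
# Illustrative exponential backoff: each failed attempt roughly doubles the
# wait before the next retry, up to a cap. Constants are assumed values.
def backoff_durations(initial=0.5, factor=2.0, cap=120.0, attempts=10):
    wait = initial
    for _ in range(attempts):
        yield wait
        wait = min(wait * factor, cap)

print([round(w, 1) for w in backoff_durations()])
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 120.0, 120.0]
```

The retry keeps failing for a different reason than the webhook errors elsewhere in this stretch: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" indicates the kubelet's secret manager has not (yet) registered that secret for the pod, so the mount source cannot be resolved at all.
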
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs") pod "network-metrics-daemon-hpd8h" (UID: "b50668f2-0a0b-40f4-9a38-3df082cf931e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.181274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.181314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.181325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.181343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.181355 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.269608 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/2.log" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.270423 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/1.log" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.273645 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae" exitCode=1 Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.273687 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.273731 4768 scope.go:117] "RemoveContainer" containerID="52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.275003 4768 scope.go:117] "RemoveContainer" containerID="ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae" Nov 24 17:50:11 crc kubenswrapper[4768]: E1124 17:50:11.275575 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.283132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 
17:50:11.283202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.283272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.283303 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.283327 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.300223 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.316387 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.330305 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.344260 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.356797 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.368982 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.383110 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.385583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.385628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.385641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.385658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.385671 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.394200 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.407949 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.425064 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.437886 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 
17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.450867 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.464076 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.477429 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.488335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.488374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.488383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.488400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.488411 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.490702 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.510782 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:54Z\\\",\\\"message\\\":\\\"network=default are: map[]\\\\nI1124 17:49:54.952643 6195 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 17:49:54.952648 6195 services_controller.go:443] Built service openshift-controller-manager/controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.149\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1124 17:49:54.952649 6195 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI1124 17:49:54.952663 6195 services_controller.go:444] Built service openshift-controller-manager/controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 17:49:54.952672 6195 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.326013ms\\\\nF1124 17:49:54.952715 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.591789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.591858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.591881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.591909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 
17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.591934 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.602237 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.694539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.694616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.694640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.694669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.694689 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.797454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.797649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.797680 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.797708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.797727 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.898231 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.898231 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:11 crc kubenswrapper[4768]: E1124 17:50:11.898460 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:11 crc kubenswrapper[4768]: E1124 17:50:11.898378 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.898609 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:11 crc kubenswrapper[4768]: E1124 17:50:11.898746 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.900114 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.900160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.900172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.900190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.900202 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:11Z","lastTransitionTime":"2025-11-24T17:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.913663 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.925030 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.935195 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.946407 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 
17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.957645 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.969445 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.981144 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:11 crc kubenswrapper[4768]: I1124 17:50:11.997881 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:11Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.001463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.001510 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.001529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.001546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.001555 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.015456 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f3373
2f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52293dab30d768f132c46502cbdfef0ea1361b1ea12fa41ce9bd87a95310b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:49:54Z\\\",\\\"message\\\":\\\"network=default are: map[]\\\\nI1124 17:49:54.952643 6195 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 17:49:54.952648 6195 services_controller.go:443] Built service openshift-controller-manager/controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.149\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1124 17:49:54.952649 6195 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}\\\\nI1124 17:49:54.952663 6195 services_controller.go:444] Built service openshift-controller-manager/controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1124 17:49:54.952672 6195 services_controller.go:360] Finished syncing service metrics on namespace openshift-apiserver-operator for network=default : 1.326013ms\\\\nF1124 17:49:54.952715 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4
dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.028062 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.040333 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.051288 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.062129 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.073634 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.087881 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.097402 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.103920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.103971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.103982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.103999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.104010 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.206912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.206960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.206995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.207015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.207027 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.280082 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/2.log" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.285820 4768 scope.go:117] "RemoveContainer" containerID="ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae" Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.286082 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.299198 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.309668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.309705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.309715 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.309630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.309730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.309890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.321447 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbd
ef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.337561 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\
\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.349379 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.363115 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.380737 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.396354 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.412711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.412766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.412816 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.412840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.412889 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.423961 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f3373
2f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.437482 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.450708 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.466201 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.481784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.494855 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.505901 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.514691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.514745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.514762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.514783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.514798 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.521722 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.617033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.617305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.617325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 
17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.617345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.617361 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.622667 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.643709 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.653260 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.671161 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.689473 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.703567 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d090
24de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.716467 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.720335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.720390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.720400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.720416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.720428 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.732474 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.745578 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.758218 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.771784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.790338 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.800993 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.813001 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.822675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.822709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.822720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.822744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.822754 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.830156 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.844222 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.854748 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.864416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.898112 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.898231 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.904558 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.904755 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:50:44.904731369 +0000 UTC m=+83.765313146 (durationBeforeRetry 32s). 
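
Every "Failed to update status for pod" record above shares one root cause: the kubelet posts each status patch to the network-node-identity webhook at https://127.0.0.1:9743/pod, and the TLS handshake rejects the webhook's serving certificate because the node clock (2025-11-24T17:50:12Z) is past the certificate's NotAfter date (2025-08-24T17:21:41Z). Below is a minimal Go sketch of the same validity-window check that crypto/x509 performs during verification; the certificate file name is a placeholder, not a path from this system.

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
    "time"
)

func main() {
    pemBytes, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical path
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        log.Fatal("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    now := time.Now().UTC()
    switch {
    case now.After(cert.NotAfter):
        // Mirrors the journal text: "current time ... is after ..."
        fmt.Printf("certificate has expired: current time %s is after %s\n",
            now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    case now.Before(cert.NotBefore):
        fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
            now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    default:
        fmt.Println("certificate is inside its validity window")
    }
}

Until the certificate is rotated or the clock corrected, every call to the webhook fails identically, which is why the same x509 error repeats for every pod in this section.
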
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.904934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.905042 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905117 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905140 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905152 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905193 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:44.905185311 +0000 UTC m=+83.765767168 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905225 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905303 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
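
The projected-volume errors around this point ("object ... not registered") come from the kubelet itself: kube-api-access-* volumes are projected from the pod's service-account token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and right after a restart the kubelet's local cache may not have registered those objects yet even when they exist on the API server. A short client-go sketch follows (it assumes client-go in go.mod and a standard kubeconfig); it cannot inspect the kubelet's cache, so it only rules out the objects being genuinely absent on the API side.

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load ~/.kube/config; in-cluster config would work equally well.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    ns := "openshift-network-diagnostics" // namespace from the log records
    for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
        if _, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
            fmt.Printf("%s/%s: %v\n", ns, name, err)
            continue
        }
        fmt.Printf("%s/%s: present on the API server\n", ns, name)
    }
}
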
No retries permitted until 2025-11-24 17:50:44.905284573 +0000 UTC m=+83.765866400 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.905570 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.905678 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905772 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905824 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905781 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905837 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905871 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:44.905860488 +0000 UTC m=+83.766442265 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:50:12 crc kubenswrapper[4768]: E1124 17:50:12.905897 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:50:44.905882298 +0000 UTC m=+83.766464075 (durationBeforeRetry 32s). 
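
The NodeNotReady records that follow all repeat a single condition: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, presumably because the crash-looping ovnkube-controller seen earlier has not yet written one. A minimal sketch of the directory scan, assuming the path from the log message and the file extensions CNI config loaders commonly accept:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    dir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet message
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Println("cannot read CNI conf dir:", err)
        return
    }
    found := 0
    for _, e := range entries {
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json": // extensions commonly accepted by libcni
            fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
            found++
        }
    }
    if found == 0 {
        // The empty-directory case is what produces "NetworkPluginNotReady" above.
        fmt.Println("no CNI configuration file in", dir)
    }
}

Once a configuration file appears in that directory, the runtime reports NetworkReady=true again and the kubelet stops emitting these NodeNotReady conditions.
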
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.925526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.925559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.925570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.925587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:12 crc kubenswrapper[4768]: I1124 17:50:12.925598 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:12Z","lastTransitionTime":"2025-11-24T17:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.033122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.033163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.033173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.033188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.033199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.136059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.136104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.136118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.136137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.136150 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.237820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.237864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.237874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.237890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.237902 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.339938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.339969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.339978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.339992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.340002 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.442251 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.442302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.442319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.442345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.442362 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.544593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.544619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.544628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.544643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.544653 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.647422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.647677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.647823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.647968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.648088 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.750787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.751552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.751692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.751790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.751895 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.854608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.854686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.854711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.854745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.854769 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.897575 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.897656 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.897597 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:13 crc kubenswrapper[4768]: E1124 17:50:13.897732 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:13 crc kubenswrapper[4768]: E1124 17:50:13.897809 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:13 crc kubenswrapper[4768]: E1124 17:50:13.897893 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.957989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.958045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.958063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.958087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:13 crc kubenswrapper[4768]: I1124 17:50:13.958104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:13Z","lastTransitionTime":"2025-11-24T17:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.061072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.061130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.061147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.061171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.061187 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.164206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.164235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.164245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.164258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.164268 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.266835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.266905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.266925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.266952 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.267011 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.369809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.369870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.369888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.369911 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.369929 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.473115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.473190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.473209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.473251 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.473267 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.576192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.576260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.576282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.576311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.576330 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.678932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.679012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.679032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.679064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.679083 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.782073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.782142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.782165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.782194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.782216 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.885296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.885624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.885782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.885890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.885984 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.897762 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:14 crc kubenswrapper[4768]: E1124 17:50:14.897919 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.988434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.988770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.988870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.988976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:14 crc kubenswrapper[4768]: I1124 17:50:14.989074 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:14Z","lastTransitionTime":"2025-11-24T17:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.092108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.092151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.092161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.092176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.092187 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.195319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.195382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.195405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.195434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.195456 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.297228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.297292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.297302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.297319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.297333 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.401129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.401203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.401215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.401235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.401275 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.504347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.504423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.504446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.504477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.504539 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.607214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.607272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.607290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.607316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.607334 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.710480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.710620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.710644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.710675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.710699 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.813887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.813927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.813939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.813955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.813966 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.897535 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.897613 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.897630 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:15 crc kubenswrapper[4768]: E1124 17:50:15.897752 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:15 crc kubenswrapper[4768]: E1124 17:50:15.897840 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:15 crc kubenswrapper[4768]: E1124 17:50:15.897944 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.916368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.916430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.916443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.916460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:15 crc kubenswrapper[4768]: I1124 17:50:15.916472 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:15Z","lastTransitionTime":"2025-11-24T17:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.019991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.020041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.020049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.020068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.020079 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.123095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.123152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.123169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.123193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.123210 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.225007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.225042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.225051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.225074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.225086 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.327292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.327330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.327341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.327360 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.327374 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.430359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.430416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.430432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.430453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.430469 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.534058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.534116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.534141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.534162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.534177 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.637506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.637542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.637550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.637566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.637579 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.739874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.739913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.739922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.739938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.739950 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.842596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.842653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.842664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.842680 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.842690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.898264 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:16 crc kubenswrapper[4768]: E1124 17:50:16.898465 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.944862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.944905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.944915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.944943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:16 crc kubenswrapper[4768]: I1124 17:50:16.944956 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:16Z","lastTransitionTime":"2025-11-24T17:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.047998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.048049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.048059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.048077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.048088 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.150031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.150079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.150089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.150105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.150116 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.253102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.253204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.253221 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.253245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.253263 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.356274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.356341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.356353 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.356373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.356384 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.459577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.459635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.459657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.459686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.459708 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.564514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.564605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.564633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.564667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.564701 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.667280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.667365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.667389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.667412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.667428 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.769817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.769900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.769925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.769954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.769975 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.872470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.872544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.872557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.872573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.872584 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.898307 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.898360 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.898370 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:17 crc kubenswrapper[4768]: E1124 17:50:17.898554 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:17 crc kubenswrapper[4768]: E1124 17:50:17.898662 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:17 crc kubenswrapper[4768]: E1124 17:50:17.898875 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.986094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.986175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.986197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.986229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:17 crc kubenswrapper[4768]: I1124 17:50:17.986249 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:17Z","lastTransitionTime":"2025-11-24T17:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.089310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.089378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.089389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.089411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.089424 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.120274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.120337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.120351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.120372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.120388 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.134329 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:18Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.138162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.138209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.138222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.138242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.138256 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.153190 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:18Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.157002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.157041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
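The repeated "Node became not ready" records above all report the same root condition: kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so NetworkReady stays false. A minimal sketch of an equivalent check, assuming it runs on the node itself and that the usual .conf/.conflist/.json extensions are the ones the CNI config loader accepts:

    from pathlib import Path

    # Directory named in the NetworkReady=false message above.
    CNI_DIR = Path("/etc/kubernetes/cni/net.d")

    # Extensions the standard CNI config loader looks for (assumption).
    configs = sorted(p for p in CNI_DIR.glob("*")
                     if p.suffix in {".conf", ".conflist", ".json"})

    if configs:
        for p in configs:
            print("found CNI config:", p)
    else:
        print(f"no CNI configuration file in {CNI_DIR}/ - network plugin not ready")

An empty (or missing) directory here is consistent with the network operator not having started yet, which is exactly what the kubelet message asks about.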
event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.157055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.157072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.157085 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.172530 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:18Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.176179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.176215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
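The "Error updating node status, will retry" record earlier in this log fails not because of the patch itself but because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24T17:21:41Z, months before the current time in the log. A small diagnostic sketch that fetches the webhook's serving certificate and prints its validity window, assuming it runs on the node and that the third-party cryptography package is installed:

    import socket
    import ssl
    from cryptography import x509

    # Endpoint taken from the webhook error above.
    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # read the certificate without trusting it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # per the log, 2025-08-24T17:21:41Z

Until that certificate is renewed, every node status patch attempt will keep failing the same way while the NotReady heartbeats continue below.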
event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.176232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.176250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.176262 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.189604 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:18Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.193590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.193636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.193647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.193663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.193672 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.207534 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:18Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.207699 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.209351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.209391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.209403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.209419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.209431 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.312167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.312220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.312233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.312250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.312262 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.415179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.415252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.415275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.415307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.415329 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.518781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.518829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.518846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.518870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.518890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.621961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.622031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.622058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.622089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.622112 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.725053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.725125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.725138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.725159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.725172 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.828113 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.828201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.828225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.828258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.828282 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.897886 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:18 crc kubenswrapper[4768]: E1124 17:50:18.898099 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.931560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.931623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.931640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.931667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:18 crc kubenswrapper[4768]: I1124 17:50:18.931685 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:18Z","lastTransitionTime":"2025-11-24T17:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.041056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.041124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.041140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.041164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.041182 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.145342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.145379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.145390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.145406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.145417 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.248379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.248540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.248562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.248588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.248606 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.351658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.351691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.351702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.351717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.351727 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.454338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.454418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.454433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.454453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.454470 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.557782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.557853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.557873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.557902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.557920 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.659788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.659830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.659842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.659859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.659871 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.762668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.762721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.762736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.762755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.762769 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.866036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.866079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.866089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.866106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.866118 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.898241 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.898363 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.898741 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:19 crc kubenswrapper[4768]: E1124 17:50:19.898784 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:19 crc kubenswrapper[4768]: E1124 17:50:19.898890 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:19 crc kubenswrapper[4768]: E1124 17:50:19.898740 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.968761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.968828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.968846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.968868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:19 crc kubenswrapper[4768]: I1124 17:50:19.968884 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:19Z","lastTransitionTime":"2025-11-24T17:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.071407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.071523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.071543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.071569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.071589 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.176697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.176752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.176768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.176791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.176808 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.279955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.280017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.280040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.280069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.280092 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.382312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.382399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.382424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.382459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.382480 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.484961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.485029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.485047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.485072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.485091 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.588903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.588948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.588958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.588978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.588995 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.691886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.691966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.691987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.692015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.692036 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.795440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.795581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.795612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.795648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.795670 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.897679 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:20 crc kubenswrapper[4768]: E1124 17:50:20.898323 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.901064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.901133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.901161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.901193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:20 crc kubenswrapper[4768]: I1124 17:50:20.901222 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:20Z","lastTransitionTime":"2025-11-24T17:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.004913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.004964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.004974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.004991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.005002 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.108392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.108479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.108539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.108576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.108600 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.211673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.211729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.211742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.211759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.211770 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.313132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.313204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.313219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.313240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.313252 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.416984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.417043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.417056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.417078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.417097 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.520474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.520551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.520560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.520582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.520594 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.623435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.623551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.623575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.623605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.623626 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.726926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.727004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.727026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.727053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.727076 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.830738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.830806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.830829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.830858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.830879 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.897969 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.897983 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:21 crc kubenswrapper[4768]: E1124 17:50:21.898274 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.898056 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:21 crc kubenswrapper[4768]: E1124 17:50:21.898682 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:21 crc kubenswrapper[4768]: E1124 17:50:21.898378 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.921480 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8
e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:21Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.933601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.933857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.933984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.934186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.934380 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:21Z","lastTransitionTime":"2025-11-24T17:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.941345 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:21Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.958124 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T17:50:21Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.970850 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:21Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:21 crc kubenswrapper[4768]: I1124 17:50:21.985210 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:21Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.005984 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.027365 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.037623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.037685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.037698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.037738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.037747 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:22Z","lastTransitionTime":"2025-11-24T17:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.051375 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.070901 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.087422 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.101047 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8d6db5-a1f0-4a91-96b7-636304d925db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.111588 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.124992 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.138781 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.140826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.140871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.140889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.140916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.140934 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:22Z","lastTransitionTime":"2025-11-24T17:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.155113 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.168666 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.181753 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.242787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.242847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.242857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.242877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.242888 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:22Z","lastTransitionTime":"2025-11-24T17:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.897277 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:22 crc kubenswrapper[4768]: E1124 17:50:22.897419 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.963846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.963889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.963898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.963913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:22 crc kubenswrapper[4768]: I1124 17:50:22.963922 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:22Z","lastTransitionTime":"2025-11-24T17:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.066921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.066997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.067013 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.067030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.067039 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:23Z","lastTransitionTime":"2025-11-24T17:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
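
[editor's note] The condition object in the recurring "Node became not ready" entries is plain JSON. A self-contained sketch that decodes the exact condition logged above and derives readiness the way a client would; the struct mirrors only the fields present in the log line, not the full corev1.NodeCondition type:

```go
// Decode the Ready condition emitted repeatedly above and evaluate it.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the "Node became not ready" entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:22Z","lastTransitionTime":"2025-11-24T17:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// A node counts as ready only when the Ready condition is "True".
	ready := c.Type == "Ready" && c.Status == "True"
	fmt.Printf("node ready=%v reason=%s\n", ready, c.Reason) // ready=false reason=KubeletNotReady
}
```
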
Has your network provider started?"} Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.789201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.789306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.789339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.789371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.789394 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:23Z","lastTransitionTime":"2025-11-24T17:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.892005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.892033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.892045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.892063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.892074 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:23Z","lastTransitionTime":"2025-11-24T17:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.897736 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.897768 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.897799 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:23 crc kubenswrapper[4768]: E1124 17:50:23.897876 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:23 crc kubenswrapper[4768]: E1124 17:50:23.898102 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:23 crc kubenswrapper[4768]: E1124 17:50:23.898180 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.995014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.995046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.995057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.995073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:23 crc kubenswrapper[4768]: I1124 17:50:23.995084 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:23Z","lastTransitionTime":"2025-11-24T17:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.097934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.097972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.097983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.097998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.098009 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:24Z","lastTransitionTime":"2025-11-24T17:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.819286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.819346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.819382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.819415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.819439 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:24Z","lastTransitionTime":"2025-11-24T17:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.898304 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:24 crc kubenswrapper[4768]: E1124 17:50:24.898536 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.922885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.922939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.922960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.922989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:24 crc kubenswrapper[4768]: I1124 17:50:24.923012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:24Z","lastTransitionTime":"2025-11-24T17:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
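
[editor's note] The I/E prefix, date, microsecond timestamp, PID, and file:line in these entries are klog's header format (Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg). A short sketch that emits structurally similar lines with k8s.io/klog/v2 (module dependency assumed); an illustration of the format, not kubelet code:

```go
// Emit klog-style structured lines resembling the kubelet entries above.
// Requires the k8s.io/klog/v2 module in go.mod.
package main

import (
	"errors"

	"k8s.io/klog/v2"
)

func main() {
	defer klog.Flush()

	// Info line with key=value pairs, like kubelet_node_status.go's
	// "Recording event message for node" entries (I prefix).
	klog.InfoS("Recording event message for node", "node", "crc", "event", "NodeNotReady")

	// Error line, like pod_workers.go's "Error syncing pod, skipping" (E prefix).
	err := errors.New("network is not ready: container runtime network not ready")
	klog.ErrorS(err, "Error syncing pod, skipping",
		"pod", "openshift-multus/network-metrics-daemon-hpd8h")
}
```
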
Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.026304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.026356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.026367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.026387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.026401 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.129146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.129217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.129233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.129247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.129256 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.231602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.231655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.231692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.231711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.231722 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.334247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.334302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.334319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.334343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.334360 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.437181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.437231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.437242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.437257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.437271 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.539577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.539619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.539630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.539646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.539656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.643124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.643161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.643170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.643190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.643199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.745849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.745888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.745896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.745910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.745918 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.847948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.847975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.847982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.847995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.848004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.897668 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.897674 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.897916 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:25 crc kubenswrapper[4768]: E1124 17:50:25.898005 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:25 crc kubenswrapper[4768]: E1124 17:50:25.898223 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.898284 4768 scope.go:117] "RemoveContainer" containerID="ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae" Nov 24 17:50:25 crc kubenswrapper[4768]: E1124 17:50:25.898338 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:25 crc kubenswrapper[4768]: E1124 17:50:25.898416 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.949850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.949879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.949887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.949900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:25 crc kubenswrapper[4768]: I1124 17:50:25.949910 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:25Z","lastTransitionTime":"2025-11-24T17:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.052432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.052462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.052470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.052506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.052515 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.155139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.155192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.155203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.155217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.155226 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.257673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.257714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.257726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.257741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.257779 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.359839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.359892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.359902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.359918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.359929 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.462213 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.462258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.462273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.462289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.462301 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.564950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.565000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.565012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.565030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.565041 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.667226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.667266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.667278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.667295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.667307 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.769962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.769990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.769998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.770011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.770021 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.871593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.871863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.871955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.872031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.872096 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.897197 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:26 crc kubenswrapper[4768]: E1124 17:50:26.897454 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.974837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.974883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.974896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.974914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:26 crc kubenswrapper[4768]: I1124 17:50:26.974931 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:26Z","lastTransitionTime":"2025-11-24T17:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.076993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.077042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.077059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.077081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.077098 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.161706 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:27 crc kubenswrapper[4768]: E1124 17:50:27.161915 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:50:27 crc kubenswrapper[4768]: E1124 17:50:27.162025 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:50:59.16199968 +0000 UTC m=+98.022581497 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs") pod "network-metrics-daemon-hpd8h" (UID: "b50668f2-0a0b-40f4-9a38-3df082cf931e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.179545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.179590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.179608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.179630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.179645 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.282093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.282189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.282209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.282231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.282248 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.383778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.383818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.383828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.383843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.383856 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.486736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.486978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.487100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.487214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.487296 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.589713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.590030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.590131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.590225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.590325 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.692994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.693058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.693074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.693100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.693116 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.794874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.794959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.794976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.794998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.795015 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897403 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897456 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897521 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:27 crc kubenswrapper[4768]: E1124 17:50:27.897541 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:27 crc kubenswrapper[4768]: E1124 17:50:27.897619 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:27 crc kubenswrapper[4768]: E1124 17:50:27.897727 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:27 crc kubenswrapper[4768]: I1124 17:50:27.897872 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:27Z","lastTransitionTime":"2025-11-24T17:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.001906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.001963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.001975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.001993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.002005 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.104022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.104069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.104080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.104096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.104139 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.206079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.206109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.206125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.206143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.206154 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.308141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.308179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.308188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.308201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.308211 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.319817 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:28Z is after 
2025-08-24T17:21:41Z" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.323659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.323713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.323722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.323737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.323746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.335342 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:28Z is after 
2025-08-24T17:21:41Z" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.339052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.339096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.339106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.339121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.339131 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.351480 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:28Z is after 
2025-08-24T17:21:41Z" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.355928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.355969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.355982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.356000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.356012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.371577 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:28Z is after 
2025-08-24T17:21:41Z" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.375242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.375267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.375276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.375290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.375298 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.388522 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:28Z is after 
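Every retry above fails for the same reason recorded at the tail of each entry: the "node.network-node-identity.openshift.io" webhook endpoint on 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, three months before the node's current clock time, so the TLS handshake for the status patch is rejected. A minimal diagnostic sketch (illustrative, not part of the log; it assumes the endpoint is reachable from the node) that dials the endpoint and prints the validity window of whatever certificate it serves:

```go
// checkwebhookcert.go — diagnostic sketch: dial the webhook endpoint named in
// the kubelet error and print the validity window of the certificate chain.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate here: verification is exactly what
	// fails in the kubelet log, and we only want to read the cert dates.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

Against this node it would be expected to print notAfter=2025-08-24T17:21:41Z with expired=true, matching the x509 error in the retry loop; rotating the node-identity serving certificate (or correcting the machine clock, if it were actually wrong) is what would let the status patch through.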
2025-08-24T17:21:41Z" Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.388707 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.390544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.390571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.390580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.390594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.390604 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.492611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.492677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.492686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.492701 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.492710 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.594720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.594932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.594940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.594952 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.594960 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.697861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.697902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.697914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.697930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.697941 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.800525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.800562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.800570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.800585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.800594 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.898002 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:28 crc kubenswrapper[4768]: E1124 17:50:28.898135 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.902561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.902591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.902599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.902613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:28 crc kubenswrapper[4768]: I1124 17:50:28.902622 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:28Z","lastTransitionTime":"2025-11-24T17:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.004405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.004432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.004440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.004452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.004462 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.107349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.107419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.107431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.107883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.107916 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.210537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.210580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.210590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.210608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.210618 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.312474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.312531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.312542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.312558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.312571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.414993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.415027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.415038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.415053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.415063 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.516991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.517018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.517026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.517041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.517053 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.619455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.619545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.619599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.619627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.619648 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.721795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.721836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.721845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.721860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.721870 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.823977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.824024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.824038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.824057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.824068 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.897632 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.897695 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:29 crc kubenswrapper[4768]: E1124 17:50:29.897823 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.897846 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:29 crc kubenswrapper[4768]: E1124 17:50:29.897988 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:29 crc kubenswrapper[4768]: E1124 17:50:29.898004 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.910526 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.926938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.927058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.927079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.927101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:29 crc kubenswrapper[4768]: I1124 17:50:29.927118 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:29Z","lastTransitionTime":"2025-11-24T17:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.029775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.029872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.029890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.029912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.029927 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.132642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.132694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.132706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.132726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.132739 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.235314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.235355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.235365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.235381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.235392 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.337048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.337078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.337088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.337101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.337130 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.338683 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/0.log" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.338742 4768 generic.go:334] "Generic (PLEG): container finished" podID="895270a4-4f6a-4be4-9701-8a0f9cbf73d7" containerID="e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462" exitCode=1 Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.338819 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerDied","Data":"e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.339318 4768 scope.go:117] "RemoveContainer" containerID="e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.356128 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.369015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.381463 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.395664 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d090
24de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.406730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.420011 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.431541 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8d6db5-a1f0-4a91-96b7-636304d925db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.439697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.440014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.440027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.440045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.440056 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.445705 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.458948 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.480946 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.499512 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.512189 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.526197 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"2025-11-24T17:49:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370\\\\n2025-11-24T17:49:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370 to /host/opt/cni/bin/\\\\n2025-11-24T17:49:45Z [verbose] multus-daemon started\\\\n2025-11-24T17:49:45Z [verbose] Readiness Indicator file check\\\\n2025-11-24T17:50:30Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.536539 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.542101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.542138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.542150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.542165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.542181 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.545349 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b190dd-915a-4160-adc8-5f7cea62aed8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fc36405a12358007f4b3e5aa6fd8cfa3d50864042eae28769c853b38e1a52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.556162 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.565242 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.577190 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:30Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.644309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.644344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.644352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.644366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.644375 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.746948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.746986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.746996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.747012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.747023 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.849000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.849040 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.849050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.849066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.849079 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.897444 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:30 crc kubenswrapper[4768]: E1124 17:50:30.897581 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.950670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.950715 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.950729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.950747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:30 crc kubenswrapper[4768]: I1124 17:50:30.950758 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:30Z","lastTransitionTime":"2025-11-24T17:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.053240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.053288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.053299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.053316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.053327 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.156362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.156423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.156436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.156458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.156472 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.260048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.260853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.260884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.260912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.260935 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.343775 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/0.log" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.343854 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerStarted","Data":"344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.356262 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b190dd-915a-4160-adc8-5f7cea62aed8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fc36405a12358007f4b3e5aa6fd8cfa3d50864042eae28769c853b38e1a52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.362835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.362879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.362890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.362913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.362935 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.368698 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.381223 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"2025-11-24T17:49:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370\\\\n2025-11-24T17:49:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370 to /host/opt/cni/bin/\\\\n2025-11-24T17:49:45Z [verbose] multus-daemon started\\\\n2025-11-24T17:49:45Z [verbose] Readiness Indicator file check\\\\n2025-11-24T17:50:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.391193 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.403313 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.415056 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.426346 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.437868 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 
17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.451751 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.462704 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.465239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.465264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.465274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.465290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.465300 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.473582 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.487921 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.507870 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.521969 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.535277 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.546776 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8d6db5-a1f0-4a91-96b7-636304d925db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.559302 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.567206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.567240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.567252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.567270 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.567280 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.571186 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.669645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 
17:50:31.669745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.669771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.669800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.669818 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.777823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.777907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.777932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.777962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.777983 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.881132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.881177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.881188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.881203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.881217 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.897881 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:31 crc kubenswrapper[4768]: E1124 17:50:31.898031 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.898053 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:31 crc kubenswrapper[4768]: E1124 17:50:31.898161 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.898220 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:31 crc kubenswrapper[4768]: E1124 17:50:31.898278 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.916730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.930825 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8d6db5-a1f0-4a91-96b7-636304d925db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.946202 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.964235 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.983024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.983070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.983085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.983102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.983113 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:31Z","lastTransitionTime":"2025-11-24T17:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:31 crc kubenswrapper[4768]: I1124 17:50:31.997523 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:31Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.013636 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.026006 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.040848 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"2025-11-24T17:49:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370\\\\n2025-11-24T17:49:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370 to /host/opt/cni/bin/\\\\n2025-11-24T17:49:45Z [verbose] multus-daemon started\\\\n2025-11-24T17:49:45Z [verbose] Readiness Indicator file check\\\\n2025-11-24T17:50:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.051075 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.062760 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b190dd-915a-4160-adc8-5f7cea62aed8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fc36405a12358007f4b3e5aa6fd8cfa3d50864042eae28769c853b38e1a52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.074370 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.083223 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.084959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.085023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.085034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.085057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.085073 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.097283 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.110224 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.123772 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.136422 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.152888 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.166898 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.188697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.188761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.188806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.188854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.188878 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.291081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.291114 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.291124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.291138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.291147 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.398232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.398335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.398358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.398399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.398429 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.502440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.502512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.502528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.502551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.502566 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.604970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.605018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.605030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.605048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.605060 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.706733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.706774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.706786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.706825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.706835 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.809279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.809342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.809363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.809391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.809412 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.897478 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:32 crc kubenswrapper[4768]: E1124 17:50:32.897643 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.912188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.912248 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.912260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.912275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:32 crc kubenswrapper[4768]: I1124 17:50:32.912287 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:32Z","lastTransitionTime":"2025-11-24T17:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.015109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.015182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.015200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.015226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.015245 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.118201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.118245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.118256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.118273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.118286 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.220212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.220267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.220284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.220309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.220326 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.323769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.323808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.323817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.323832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.323843 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.425856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.425923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.425937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.425954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.425967 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.528073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.528115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.528128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.528147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.528157 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.630173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.630210 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.630219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.630241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.630253 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.733304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.733352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.733364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.733380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.733391 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.835808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.835852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.835873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.835903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.835924 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.898058 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:33 crc kubenswrapper[4768]: E1124 17:50:33.898181 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.898079 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.898260 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:33 crc kubenswrapper[4768]: E1124 17:50:33.898325 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:33 crc kubenswrapper[4768]: E1124 17:50:33.898419 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.938373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.938412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.938427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.938443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:33 crc kubenswrapper[4768]: I1124 17:50:33.938454 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:33Z","lastTransitionTime":"2025-11-24T17:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.042743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.042800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.042817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.042845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.042860 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.145586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.145642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.145654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.145675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.145687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.248338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.248365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.248373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.248396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.248409 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.351847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.351879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.351886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.351902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.351911 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.454723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.454763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.454771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.454786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.454795 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.558105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.558165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.558177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.558195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.558206 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.660939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.661007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.661030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.661060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.661083 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.764076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.764134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.764155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.764179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.764197 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.866799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.866860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.866883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.866912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.866935 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.897561 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:34 crc kubenswrapper[4768]: E1124 17:50:34.897691 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.969195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.969271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.969284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.969303 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:34 crc kubenswrapper[4768]: I1124 17:50:34.969316 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:34Z","lastTransitionTime":"2025-11-24T17:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.072401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.072464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.072482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.072529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.072553 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.176017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.176289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.176532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.176681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.176780 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.280321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.280362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.280374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.280392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.280404 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.383453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.383522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.383531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.383549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.383559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.485807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.486202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.486322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.486619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.486864 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.590447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.590982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.591142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.591306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.591436 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.693983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.694042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.694055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.694070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.694079 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.796775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.796809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.796819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.796834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.796846 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.899934 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:35 crc kubenswrapper[4768]: E1124 17:50:35.900250 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.900474 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:35 crc kubenswrapper[4768]: E1124 17:50:35.900609 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.900799 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:35 crc kubenswrapper[4768]: E1124 17:50:35.900912 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.902354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.902379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.902389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.902403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:35 crc kubenswrapper[4768]: I1124 17:50:35.902413 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:35Z","lastTransitionTime":"2025-11-24T17:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.007809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.007865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.007878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.007895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.007907 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.111738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.111804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.111836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.111864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.111885 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.214946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.215291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.215420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.215563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.215700 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.318001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.318035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.318046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.318062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.318074 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.420337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.420389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.420440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.420465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.420516 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.523401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.523468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.523532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.523567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.523589 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.626632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.626684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.626704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.626733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.626757 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.729265 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.729309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.729324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.729345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.729358 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.831619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.831737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.831781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.831816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.831839 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.897743 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:36 crc kubenswrapper[4768]: E1124 17:50:36.897893 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.934302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.934338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.934350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.934364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:36 crc kubenswrapper[4768]: I1124 17:50:36.934373 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:36Z","lastTransitionTime":"2025-11-24T17:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.037537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.037588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.037615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.037629 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.037639 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.140653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.140724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.140746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.140772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.140793 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.243584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.243668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.243691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.243721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.243744 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.350306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.350355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.350367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.350384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.350395 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.453766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.453806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.453815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.453830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.453839 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.556392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.556435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.556445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.556459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.556469 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.658438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.658475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.658511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.658528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.658539 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.761178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.761219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.761229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.761244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.761255 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.864561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.864616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.864632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.864657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.864676 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.897536 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.897633 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.897565 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:37 crc kubenswrapper[4768]: E1124 17:50:37.897748 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:37 crc kubenswrapper[4768]: E1124 17:50:37.897934 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:37 crc kubenswrapper[4768]: E1124 17:50:37.898126 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.967037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.967068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.967076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.967090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:37 crc kubenswrapper[4768]: I1124 17:50:37.967100 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:37Z","lastTransitionTime":"2025-11-24T17:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.069732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.069777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.069785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.069800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.069809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.172599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.172669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.172693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.172725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.172746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.275584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.275617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.275626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.275641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.275650 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.378600 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.378643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.378654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.378672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.378682 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.481005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.481057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.481074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.481090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.481102 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.583594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.583631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.583641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.583657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.583667 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.640852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.640916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.640940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.640971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.640996 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.658636 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:38Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.662676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.662712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.662722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.662737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.662748 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.680754 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:38Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.684425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.684464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.684473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.684506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.684518 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.700623 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:38Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.704668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.704728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.704746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.704775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.704799 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.722231 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:38Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.728373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.728436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.728457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.728482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.728542 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.749762 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40e3ee2d-c39e-4d52-9a9e-87f50cf9c8f3\\\",\\\"systemUUID\\\":\\\"f215b4ef-9be9-4deb-ac5d-b54dee019f27\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:38Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.749974 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.751710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.751761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.751775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.751793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.751805 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.854690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.854739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.854755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.854778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.854795 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.897533 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:38 crc kubenswrapper[4768]: E1124 17:50:38.897708 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.914694 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.957442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.957547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.957584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.957616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:38 crc kubenswrapper[4768]: I1124 17:50:38.957639 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:38Z","lastTransitionTime":"2025-11-24T17:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.060036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.060292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.060364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.060439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.060536 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.163369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.163417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.163435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.163459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.163475 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.265615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.265712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.265742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.265773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.265795 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.368321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.368356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.368367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.368383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.368395 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
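The burst repeating above roughly every 100 ms (four "Recording event message" entries, then the same Ready=False condition from setters.go) is the kubelet's node-status loop spinning while the network plugin is down; nothing new is said after the first iteration. For a capture that runs to thousands of entries, collapsing it by message makes the actual signal easy to see. A standard-library sketch, whose only assumption is the klog prefix layout visible in these lines:

    # Sketch: collapse a kubenswrapper journal capture into distinct klog
    # messages, with a count and first/last timestamp for each. Reads the raw
    # journal text on stdin; works even when entries are flowed together.
    import re
    import sys
    from collections import Counter

    # klog prefix, e.g.: I1124 17:50:39.163369 4768 kubelet_node_status.go:724] "msg"
    KLOG = re.compile(
        r'([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+\d+ (\S+)\] ("[^"]*")')

    counts, first, last = Counter(), {}, {}
    for m in KLOG.finditer(sys.stdin.read()):
        level, _date, ts, location, msg = m.groups()
        key = (level, location, msg)
        counts[key] += 1
        first.setdefault(key, ts)
        last[key] = ts

    for key, n in counts.most_common():
        level, location, msg = key
        print(f"{n:5d}x {level} {location} {msg}  [{first[key]} .. {last[key]}]")

Fed this capture it should surface only a handful of distinct messages: hundreds of node-status and condition repeats, one webhook failure per retry from kubelet_node_status.go:585, and the occasional pod_workers.go sync error.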
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.471455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.471696 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.471727 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.471757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.471780 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.574724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.574833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.574845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.574870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.574889 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.677564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.677618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.677627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.677643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.677656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.780772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.780831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.780846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.780865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.780877 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.884178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.884254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.884289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.884316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.884335 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.897345 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:39 crc kubenswrapper[4768]: E1124 17:50:39.897587 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.897691 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.897738 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
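The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs above follow directly from the Ready=False condition: with no CNI configuration, the runtime cannot create pod sandboxes, so every pod needing a network attachment is skipped until the network provider writes its config back. The message names /etc/kubernetes/cni/net.d/ as this cluster's confdir (/etc/cni/net.d is the more common default), and CNI config loaders conventionally accept *.conf, *.conflist and *.json files. A standard-library sketch of the same directory check (the path and failure text come from the log; the suffix set is an assumption about the loader):

    # Sketch: inspect the CNI confdir the way a container runtime's config
    # loader does -- any *.conf, *.conflist or *.json file counts as a network.
    import json
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory from the log message

    if not CNI_CONF_DIR.is_dir():
        raise SystemExit(f"{CNI_CONF_DIR} does not exist")

    candidates = sorted(p for p in CNI_CONF_DIR.glob("*")
                        if p.suffix in {".conf", ".conflist", ".json"})

    if not candidates:
        # The state reported above: NetworkReady=false, sandbox creation skipped.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/.")
    for path in candidates:
        net = json.loads(path.read_text())
        print(f"{path.name}: network {net.get('name', '<unnamed>')!r}")

On this node the directory would stay empty until the OVN-Kubernetes pods restart and regenerate their config, at which point NetworkReady flips and the skipped pods get their sandboxes.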
Nov 24 17:50:39 crc kubenswrapper[4768]: E1124 17:50:39.897892 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:39 crc kubenswrapper[4768]: E1124 17:50:39.898159 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.987545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.987598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.987615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.987639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:39 crc kubenswrapper[4768]: I1124 17:50:39.987656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:39Z","lastTransitionTime":"2025-11-24T17:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.090669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.090740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.090762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.090793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.090816 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.193799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.193863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.193881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.193908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.193930 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.297436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.297671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.297740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.297769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.297786 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.399944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.399995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.400012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.400035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.400051 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.502787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.502815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.502824 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.502837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.502845 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.605712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.605786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.605797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.605816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.605828 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.707925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.707964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.707975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.707990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.708000 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.810417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.810513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.810528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.810549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.810563 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.897605 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:40 crc kubenswrapper[4768]: E1124 17:50:40.898052 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.898342 4768 scope.go:117] "RemoveContainer" containerID="ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.914184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.914256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.914272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.914297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:40 crc kubenswrapper[4768]: I1124 17:50:40.914315 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:40Z","lastTransitionTime":"2025-11-24T17:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.018172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.018227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.018239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.018260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.018278 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.121271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.121309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.121324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.121349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.121365 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.224017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.224060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.224072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.224090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.224103 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.326885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.326919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.326929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.326943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.326951 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.375810 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/2.log" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.379466 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.379958 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.396475 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.418707 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.431808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.431880 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.431903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.431932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.431950 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.434747 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.454072 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"704291d8-e296-4e68-af25-2df125bbf5f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5594e7a35900cb3a27abf0b6b52c8c5eb5dc6073fde777591827aa0b263d1fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765297eaac3b23102363c5a20bb8ba2adfe61b234cd89efe9f4a990ca64f775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9efe128c5c465a5e97ed3999c845aaf99f54ce8f8f284ef94e862849c4bd1440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3b965619b00a1c06e5bbba266233972deaebef7329c7df8f9e8b281c15dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2638ec423b0ed84cb8f7fd9675411807c732a4bc0d6e7d225e7bc75d4eab440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115aa8b11d06015e075ecd057cebfeb48e8b48dabf4dcde085db58e7c9bfef63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://115aa8b11d06015e075ecd057cebfeb48e8b48dabf4dcde085db58e7c9bfef63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aed504cdce697a67257909347234d1d268731cfd4788665702d9f1fefd81fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0aed504cdce697a67257909347234d1d268731cfd4788665702d9f1fefd81fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://41d7190571d28ba8a919c55ab72367fc821c76af3a484f2d846faf223b91ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41d7190571d28ba8a919c55ab72367fc821c76af3a484f2d846faf223b91ba10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.469224 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.482173 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.496329 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.515122 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.534457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.534513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.534523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.534543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.534554 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.542902 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.555631 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.572085 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.585542 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8d6db5-a1f0-4a91-96b7-636304d925db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.601827 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.615507 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b190dd-915a-4160-adc8-5f7cea62aed8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fc36405a12358007f4b3e5aa6fd8cfa3d50864042eae28769c853b38e1a52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.636828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.637874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.637936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.637954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.637992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.638010 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.654867 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"2025-11-24T17:49:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370\\\\n2025-11-24T17:49:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370 to /host/opt/cni/bin/\\\\n2025-11-24T17:49:45Z [verbose] multus-daemon started\\\\n2025-11-24T17:49:45Z [verbose] Readiness Indicator file check\\\\n2025-11-24T17:50:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.674362 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.690821 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.704730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.740940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.741025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.741052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.741086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.741113 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.843661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.843694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.843703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.843718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.843730 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.897811 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:41 crc kubenswrapper[4768]: E1124 17:50:41.897956 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.898215 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:41 crc kubenswrapper[4768]: E1124 17:50:41.898313 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.898560 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:41 crc kubenswrapper[4768]: E1124 17:50:41.898634 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.914298 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b50668f2-0a0b-40f4-9a38-3df082cf931e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvrbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hpd8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.929122 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://201388673e38964677626f1794671042245f0b82ffc51d65406b50027b31f183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.939853 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://260ffa106175e178f65f4645c324c91c2ab34a3fe94ccd7e6541c7db8fed17a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.945631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.945664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.945674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.945689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.945699 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:41Z","lastTransitionTime":"2025-11-24T17:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.951070 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdbcm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"401a0505-4a0c-4407-a38d-fe41e14b4d2a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1079361b51823c504fff25fbc5e40f365abdae0f4f27ca51b08727f868ddfe95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv9ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdbcm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.963305 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ad8fd839ca502f70ef66e016780e1556c5bc457015c8eef5ed5e68a9105c85c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z5sbg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ljwzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.980831 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6x87x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"733afdb8-b6a5-40b5-8164-5885baf3eceb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f23594156940300d1bf6b73029889619d8bf369f4f63ae805b96a9ea6ca8ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a27241ca288bfabce247de76fb11c243cc4a3e56632cd151f402dbb5b99788ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fd85252e6ddd46f9e2e41e67f1f6c5f3f216a811bece44395a9ee57615b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c5755577122e50cc4b4f0e3c4b3577a3ef1e9bec407a90dc2f49355ff7fea57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a00515ea8b09ec173e9c8e6a38058c7546ed63d09024de097bf8678e9ec9c19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e87ade7552a7b42fd430dc84f800f15ece3d8fb16b6b8d4d95955cb980f6c71f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://519f88c13cb8bedeeda53a91531fd04f2e8606ecfb1d2737e63e647cb3cf2098\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkz2q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6x87x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:41 crc kubenswrapper[4768]: I1124 17:50:41.998074 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1a20e-72eb-4519-a3fd-2b973853a250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb672a88df17613dbca084f61bf9e25ed9bc3447b12250daa985c15f34aa1609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f38a22cb9045e7a2e48fe0dd57c4fd11a8bf1e77d5870c414f48a10f5b93fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9nm7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:41Z is after 2025-08-24T17:21:41Z" Nov 24 
17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.030239 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"704291d8-e296-4e68-af25-2df125bbf5f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5594e7a35900cb3a27abf0b6b52c8c5eb5dc6073fde777591827aa0b263d1fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8765297eaac3b23102363c5a20bb8ba2adfe61b234cd89efe9f4a990ca64f775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9efe128c5c465a5e97ed3999c845aaf99f54ce8f8f284ef94e862849c4bd1440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa3b965619b00a1c06e5bbba266233972deaebef7329c7df8f9e8b281c15dc7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2638ec423b0ed84cb8f7fd9675411807c732a4bc0d6e7d225e7bc75d4eab440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://115aa8b11d06015e075ecd057cebfeb48e8b48dabf4dcde085db58e7c9bfef63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://115aa8b11d06015e075ecd057cebfeb48e8b48dabf4dcde085db58e7c9bfef63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0aed504cdce697a67257909347234d1d268731cfd4788665702d9f1fefd81fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0aed504cdce697a67257909347234d1d268731cfd4788665702d9f1fefd81fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://41d7190571d28ba8a919c55ab72367fc821c76af3a484f2d846faf223b91ba10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41d7190571d28ba8a919c55ab72367fc821c76af3a484f2d846faf223b91ba10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.043712 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.047601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.047708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.047733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.047762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.047781 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.054983 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8d6db5-a1f0-4a91-96b7-636304d925db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5368b95b504f69098e8059eab5d10a29142319fedc02aa3421d2f133fa1dbee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85638081d180f3f49a5865193eb7baf9777cafcbd197443feec23cc087f0e52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95daff4f04063e1c9db4e0dfc63a119a4ad136c47a453a705d4c481aaf03e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96446ccced1c381ff1d08d54963d3808ce0517e50c32291efa12f5e9e983bd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.066818 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.079255 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e87823f86669e84a12b2678666cc2d861834971db9d05dc66a8792a665ede004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1751ce1c098b35835e033b78a6b9f9322298f0256def5bca78051d3879917538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.098413 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c76ffb136717df22c456e7d03db2b8228eab24
42df0a21f048d134e7fe7af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:10Z\\\",\\\"message\\\":\\\"neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831841 6396 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 17:50:10.831881 6396 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 17:50:10.831943 6396 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 17:50:10.831982 6396 ovnkube.go:599] Stopped ovnkube\\\\nI1124 17:50:10.832011 6396 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 17:50:10.832099 6396 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:50:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dhc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2gjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.111922 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a19f376e-20c8-4a9b-a99b-98c72f2ef8d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e730e6e3bd7ee942ffc102ea5d7a2886c92d4059cd97717a3589ab44ed27d69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3194d28c5d8be58b848a00deec884d32278fdeb1bfba8c699b1d421c9e798b1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dfb69f1617f80fda2791092e61870583fc964936dad5284b8600792d491182e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ac23c612196007add7b434c601f9865c53a01a20b03b79d6701884ed565f35a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74ccecb74f827fa2f251c1b4146f329b3cf28f9e42a4b2bccfbc5201fed300ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c19b32d8919c40654a8e515bf107e7e90d38fedb6a16f07fbe09e00506e76802\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.124805 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d202559-f3ca-4aad-8af0-8ed72c6bf01b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3395ed51e7bdac8a7e1aa0ad6407b278d0fbf65949d53c63b1ae5bf9fed316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcbcfc3d0864e0ee0a23e5f9de2eeb61f2207753d7f50f423ae8e4458c21f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb94c3a12de71b18c4890da35e5135c659f9259c3d884e2b3c90c46e0679b65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e47b6f5ea5f97a582120385d546bcc0ce07b23d7d6e7432fb68ae4e3b37d7a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.141854 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vssnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895270a4-4f6a-4be4-9701-8a0f9cbf73d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T17:50:30Z\\\",\\\"message\\\":\\\"2025-11-24T17:49:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370\\\\n2025-11-24T17:49:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_243ebbd2-89b4-4c72-ab1d-3f4619d11370 to /host/opt/cni/bin/\\\\n2025-11-24T17:49:45Z [verbose] multus-daemon started\\\\n2025-11-24T17:49:45Z [verbose] Readiness 
Indicator file check\\\\n2025-11-24T17:50:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54hk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vssnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.150344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.150384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.150396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.150411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.150420 4768 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.153526 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ba241e-dd35-4128-a0e2-ee818cf1576f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff431b7bf67b94248ec223c7a813f36b60d32b9bb971f3393b4d135f811333b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb6z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.164801 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b190dd-915a-4160-adc8-5f7cea62aed8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fc36405a12358007f4b3e5aa6fd8cfa3d50864042eae28769c853b38e1a52e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T17:49:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84a463a138019bef8b5c936e83f9d0bd1713b4e2440cea5c8f21b80a7a329619\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T17:49:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T17:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T17:49:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.176623 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T17:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T17:50:42Z is after 2025-08-24T17:21:41Z" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.254598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.254644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.254653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.254670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.254683 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.357641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.357946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.358008 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.358080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.358137 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.477847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.478078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.478177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.478239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.478293 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.580794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.580860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.580885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.580913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.580934 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.683734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.683796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.683808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.683830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.683845 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.786964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.787033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.787055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.787089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.787110 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.889142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.889173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.889181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.889193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.889202 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.897795 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:42 crc kubenswrapper[4768]: E1124 17:50:42.897918 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.991026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.991081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.991093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.991107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:42 crc kubenswrapper[4768]: I1124 17:50:42.991117 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:42Z","lastTransitionTime":"2025-11-24T17:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.094399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.094512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.094531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.094561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.094580 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.197339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.197379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.197389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.197403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.197413 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.301243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.301303 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.301573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.301596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.301611 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.386831 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/3.log" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.387922 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/2.log" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.391350 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8" exitCode=1 Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.391401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.391450 4768 scope.go:117] "RemoveContainer" containerID="ba59caae124be1832602d344aafade1cd61f33732f5dd63a91707afdbb57bdae" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.392645 4768 scope.go:117] "RemoveContainer" containerID="27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8" Nov 24 17:50:43 crc kubenswrapper[4768]: E1124 17:50:43.392998 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.404600 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.404682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.404706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.404739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.404764 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.436144 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=63.43611457 podStartE2EDuration="1m3.43611457s" podCreationTimestamp="2025-11-24 17:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.418350703 +0000 UTC m=+82.278932500" watchObservedRunningTime="2025-11-24 17:50:43.43611457 +0000 UTC m=+82.296696367" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.436445 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=55.436439989 podStartE2EDuration="55.436439989s" podCreationTimestamp="2025-11-24 17:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.434991092 +0000 UTC m=+82.295572889" watchObservedRunningTime="2025-11-24 17:50:43.436439989 +0000 UTC m=+82.297021776" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.452565 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=31.452538034 podStartE2EDuration="31.452538034s" podCreationTimestamp="2025-11-24 17:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.450529202 +0000 UTC m=+82.311110979" watchObservedRunningTime="2025-11-24 17:50:43.452538034 +0000 UTC m=+82.313119811" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.507943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.508007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.508024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.508047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.508064 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.520271 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.52025201 podStartE2EDuration="14.52025201s" podCreationTimestamp="2025-11-24 17:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.520140697 +0000 UTC m=+82.380722484" watchObservedRunningTime="2025-11-24 17:50:43.52025201 +0000 UTC m=+82.380833787" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.547149 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vssnl" podStartSLOduration=62.547129553 podStartE2EDuration="1m2.547129553s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.546577339 +0000 UTC m=+82.407159116" watchObservedRunningTime="2025-11-24 17:50:43.547129553 +0000 UTC m=+82.407711330" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.587789 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m7zct" podStartSLOduration=62.5877666 podStartE2EDuration="1m2.5877666s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.556540315 +0000 UTC m=+82.417122112" watchObservedRunningTime="2025-11-24 17:50:43.5877666 +0000 UTC m=+82.448348387" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.610158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.610214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.610227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.610243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.610254 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.685892 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.68586523 podStartE2EDuration="5.68586523s" podCreationTimestamp="2025-11-24 17:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.67032523 +0000 UTC m=+82.530907027" watchObservedRunningTime="2025-11-24 17:50:43.68586523 +0000 UTC m=+82.546447007" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.696389 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xdbcm" podStartSLOduration=62.696365631 podStartE2EDuration="1m2.696365631s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.696188846 +0000 UTC m=+82.556770623" watchObservedRunningTime="2025-11-24 17:50:43.696365631 +0000 UTC m=+82.556947408" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.707527 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podStartSLOduration=62.707473847 podStartE2EDuration="1m2.707473847s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.707087177 +0000 UTC m=+82.567668954" watchObservedRunningTime="2025-11-24 17:50:43.707473847 +0000 UTC m=+82.568055624" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.713075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.713139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.713151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.713174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.713187 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.725579 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6x87x" podStartSLOduration=62.725551854 podStartE2EDuration="1m2.725551854s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.725092371 +0000 UTC m=+82.585674148" watchObservedRunningTime="2025-11-24 17:50:43.725551854 +0000 UTC m=+82.586133621" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.816043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.816121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.816134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.816150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.816163 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.897582 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.897745 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:43 crc kubenswrapper[4768]: E1124 17:50:43.897847 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.897662 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:43 crc kubenswrapper[4768]: E1124 17:50:43.897996 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:43 crc kubenswrapper[4768]: E1124 17:50:43.898081 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.918381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.918417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.918425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.918439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:43 crc kubenswrapper[4768]: I1124 17:50:43.918448 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:43Z","lastTransitionTime":"2025-11-24T17:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.021785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.021840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.021853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.021870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.021884 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.124574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.124614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.124623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.124641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.124653 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.227180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.227259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.227269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.227286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.227299 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.330748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.330800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.330812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.330830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.330846 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.398228 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/3.log" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.433004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.433048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.433060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.433098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.433114 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.536030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.536077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.536088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.536105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.536117 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.639594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.639860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.639972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.640079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.640168 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.742256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.742289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.742298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.742311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.742320 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.844953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.844991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.845007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.845030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.845045 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.897412 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.897629 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.947436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.947670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.947735 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.947794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.947850 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:44Z","lastTransitionTime":"2025-11-24T17:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.974283 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.974368 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.974408 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.974428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:44 crc kubenswrapper[4768]: I1124 17:50:44.974446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974549 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974568 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974577 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974613 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.97460123 +0000 UTC m=+147.835183007 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974643 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.97463159 +0000 UTC m=+147.835213367 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974706 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974718 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974727 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974761 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.974754073 +0000 UTC m=+147.835335850 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974800 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974839 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974901 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.974871306 +0000 UTC m=+147.835453113 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 17:50:44 crc kubenswrapper[4768]: E1124 17:50:44.974987 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.974968029 +0000 UTC m=+147.835549936 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.051563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.051941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.052149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.052359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.052597 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.156039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.156079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.156087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.156101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.156111 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.259273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.259347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.259371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.259401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.259423 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.362654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.362738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.362772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.362803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.362826 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.465262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.465312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.465324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.465343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.465355 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.568632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.568707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.568723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.568751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.568772 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.671936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.672006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.672031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.672062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.672084 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.774141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.774203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.774220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.774243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.774259 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.877593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.877693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.877744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.877769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.877788 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.897376 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:45 crc kubenswrapper[4768]: E1124 17:50:45.897626 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.897658 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.897759 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:45 crc kubenswrapper[4768]: E1124 17:50:45.897900 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:45 crc kubenswrapper[4768]: E1124 17:50:45.898202 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.980472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.980586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.980604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.980632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:45 crc kubenswrapper[4768]: I1124 17:50:45.980651 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:45Z","lastTransitionTime":"2025-11-24T17:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.083642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.083694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.083706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.083726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.083739 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.186842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.186892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.186907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.186929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.186941 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.289299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.289345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.289357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.289376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.289389 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.392318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.392373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.392389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.392409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.392422 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.495472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.495548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.495566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.495589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.495605 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.598713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.598780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.598802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.598831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.598854 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.701022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.701378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.701403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.701431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.701452 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.803725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.803781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.803806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.803823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.803831 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.897940 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:46 crc kubenswrapper[4768]: E1124 17:50:46.898241 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.906930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.906987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.907000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.907025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:46 crc kubenswrapper[4768]: I1124 17:50:46.907038 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:46Z","lastTransitionTime":"2025-11-24T17:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.009266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.009302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.009312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.009327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.009337 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.112657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.112699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.112710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.112727 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.112739 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.216316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.216367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.216382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.216619 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.216636 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.319281 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.319339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.319350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.319368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.319379 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.421153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.421203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.421215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.421230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.421246 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.524460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.524515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.524524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.524540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.524549 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.627226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.627264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.627273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.627286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.627296 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.729556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.729638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.729662 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.729691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.729712 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.832013 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.832050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.832060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.832075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.832091 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.897968 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.898017 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.898198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:47 crc kubenswrapper[4768]: E1124 17:50:47.898185 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:47 crc kubenswrapper[4768]: E1124 17:50:47.898330 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:47 crc kubenswrapper[4768]: E1124 17:50:47.898442 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.935467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.935582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.935605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.935633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:47 crc kubenswrapper[4768]: I1124 17:50:47.935653 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:47Z","lastTransitionTime":"2025-11-24T17:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
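Every record above blocks on the same condition: the kubelet keeps NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness test, in Python for brevity; the extension set is an assumption based on common CNI loaders and is not stated anywhere in this log:

import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the errors above
CNI_EXTENSIONS = (".conf", ".conflist", ".json")  # assumed, not taken from the log

def cni_config_present(conf_dir=CNI_CONF_DIR):
    # NetworkReady should flip to True once at least one config file exists.
    try:
        return any(name.endswith(CNI_EXTENSIONS) for name in os.listdir(conf_dir))
    except FileNotFoundError:
        return False

print("CNI config present:", cni_config_present())

On this node the file is expected from OVN-Kubernetes, whose ovnkube-controller is crash-looping further down, which would explain why the condition never clears in this window.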
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.038915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.038974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.038991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.039015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.039032 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.142065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.142109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.142148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.142165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.142178 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.245070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.245137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.245159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.245190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.245215 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.348851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.348924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.348945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.348974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.348994 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.452646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.452719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.452731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.452788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.452806 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.555568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.555624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.555640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.555661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.555675 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.659739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.659782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.659790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.659806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.659816 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.762133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.762172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.762185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.762205 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.762216 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.865122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.865234 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.865257 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.865283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.865300 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.897367 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:48 crc kubenswrapper[4768]: E1124 17:50:48.897539 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.899269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.899313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.899325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.899344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.899356 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T17:50:48Z","lastTransitionTime":"2025-11-24T17:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.958582 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9nm7w" podStartSLOduration=67.958561192 podStartE2EDuration="1m7.958561192s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:43.739317319 +0000 UTC m=+82.599899096" watchObservedRunningTime="2025-11-24 17:50:48.958561192 +0000 UTC m=+87.819142979"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.959774 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"]
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.960193 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
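The pod_startup_latency_tracker record above is internally consistent: podStartSLOduration (67.958561192s) is exactly watchObservedRunningTime minus podCreationTimestamp, and the zero-valued pulling timestamps mean image pulls contributed nothing. A quick check of that arithmetic; the nanosecond-trimming helper is ours, since datetime's %f only accepts microseconds:

from datetime import datetime, timezone

def parse_k8s_time(ts):
    # "2025-11-24 17:50:48.958561192 +0000 UTC" -> aware datetime (ns truncated to us)
    date, clock = ts.split()[0], ts.split()[1]
    head, _, frac = clock.partition(".")
    clock = head + "." + (frac[:6] or "0")
    return datetime.strptime(date + " " + clock, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created = parse_k8s_time("2025-11-24 17:49:41 +0000 UTC")
watched = parse_k8s_time("2025-11-24 17:50:48.958561192 +0000 UTC")
print((watched - created).total_seconds())  # 67.958561, matching podStartSLOduration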
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.961674 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.962082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.962360 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 24 17:50:48 crc kubenswrapper[4768]: I1124 17:50:48.962538 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.121279 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8285a013-5299-40d4-b39c-31a7a30ef812-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.121360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8285a013-5299-40d4-b39c-31a7a30ef812-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.121408 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8285a013-5299-40d4-b39c-31a7a30ef812-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.121456 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8285a013-5299-40d4-b39c-31a7a30ef812-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.121611 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8285a013-5299-40d4-b39c-31a7a30ef812-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.223167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8285a013-5299-40d4-b39c-31a7a30ef812-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc
kubenswrapper[4768]: I1124 17:50:49.223246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8285a013-5299-40d4-b39c-31a7a30ef812-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.223306 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8285a013-5299-40d4-b39c-31a7a30ef812-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.223317 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8285a013-5299-40d4-b39c-31a7a30ef812-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.223370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8285a013-5299-40d4-b39c-31a7a30ef812-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.223421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8285a013-5299-40d4-b39c-31a7a30ef812-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.223593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8285a013-5299-40d4-b39c-31a7a30ef812-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.225188 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8285a013-5299-40d4-b39c-31a7a30ef812-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.233268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8285a013-5299-40d4-b39c-31a7a30ef812-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.251244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/8285a013-5299-40d4-b39c-31a7a30ef812-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5t6d6\" (UID: \"8285a013-5299-40d4-b39c-31a7a30ef812\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.279412 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6"
Nov 24 17:50:49 crc kubenswrapper[4768]: W1124 17:50:49.299317 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8285a013_5299_40d4_b39c_31a7a30ef812.slice/crio-c8fa0b666ed638b1f71f2cde9aa26ac9faf880a350da84953c42b4075a86448f WatchSource:0}: Error finding container c8fa0b666ed638b1f71f2cde9aa26ac9faf880a350da84953c42b4075a86448f: Status 404 returned error can't find the container with id c8fa0b666ed638b1f71f2cde9aa26ac9faf880a350da84953c42b4075a86448f
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.418593 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" event={"ID":"8285a013-5299-40d4-b39c-31a7a30ef812","Type":"ContainerStarted","Data":"c8fa0b666ed638b1f71f2cde9aa26ac9faf880a350da84953c42b4075a86448f"}
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.898373 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:49 crc kubenswrapper[4768]: E1124 17:50:49.899091 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.898884 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:49 crc kubenswrapper[4768]: E1124 17:50:49.899277 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:49 crc kubenswrapper[4768]: I1124 17:50:49.898843 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:49 crc kubenswrapper[4768]: E1124 17:50:49.899345 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
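The W-level 404 from manager.go:1169 and the ContainerStarted PLEG event two records later name the same container ID (c8fa0b66...), so the cadvisor watch simply fired before CRI-O had finished registering the new container: a benign startup race. A sketch for confirming that pattern when scanning a saved copy of this journal; both regexes are derived from the records above, and the helper name is ours:

import re

WATCH_404 = re.compile(r"Error finding container ([0-9a-f]{64})")
PLEG_STARTED = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

def classify_watch_misses(lines):
    # IDs that 404'd but later produced ContainerStarted are benign races;
    # anything left in the second set never came up and deserves a look.
    started, missed = set(), set()
    for line in lines:
        if m := PLEG_STARTED.search(line):
            started.add(m.group(1))
        if m := WATCH_404.search(line):
            missed.add(m.group(1))
    return missed & started, missed - started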
Nov 24 17:50:50 crc kubenswrapper[4768]: I1124 17:50:50.423315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" event={"ID":"8285a013-5299-40d4-b39c-31a7a30ef812","Type":"ContainerStarted","Data":"e106cd8dd91a23fbc98016a6989f3d13186e47b324302c83e206a4dd73ef1c13"}
Nov 24 17:50:50 crc kubenswrapper[4768]: I1124 17:50:50.439064 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5t6d6" podStartSLOduration=69.439044564 podStartE2EDuration="1m9.439044564s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:50:50.437832473 +0000 UTC m=+89.298414270" watchObservedRunningTime="2025-11-24 17:50:50.439044564 +0000 UTC m=+89.299626371"
Nov 24 17:50:50 crc kubenswrapper[4768]: I1124 17:50:50.897856 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:50 crc kubenswrapper[4768]: E1124 17:50:50.897981 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:51 crc kubenswrapper[4768]: I1124 17:50:51.897764 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:51 crc kubenswrapper[4768]: I1124 17:50:51.897937 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:51 crc kubenswrapper[4768]: E1124 17:50:51.900073 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:51 crc kubenswrapper[4768]: I1124 17:50:51.900164 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:51 crc kubenswrapper[4768]: E1124 17:50:51.900324 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:51 crc kubenswrapper[4768]: E1124 17:50:51.900629 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:52 crc kubenswrapper[4768]: I1124 17:50:52.897958 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:50:52 crc kubenswrapper[4768]: E1124 17:50:52.898077 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:50:53 crc kubenswrapper[4768]: I1124 17:50:53.898195 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:50:53 crc kubenswrapper[4768]: I1124 17:50:53.898235 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:50:53 crc kubenswrapper[4768]: E1124 17:50:53.898303 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:50:53 crc kubenswrapper[4768]: E1124 17:50:53.898364 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:50:53 crc kubenswrapper[4768]: I1124 17:50:53.898758 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:50:53 crc kubenswrapper[4768]: E1124 17:50:53.898881 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:50:54 crc kubenswrapper[4768]: I1124 17:50:54.898262 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:54 crc kubenswrapper[4768]: E1124 17:50:54.898782 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:54 crc kubenswrapper[4768]: I1124 17:50:54.898954 4768 scope.go:117] "RemoveContainer" containerID="27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8"
Nov 24 17:50:54 crc kubenswrapper[4768]: E1124 17:50:54.899152 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"
Nov 24 17:50:55 crc kubenswrapper[4768]: I1124 17:50:55.898278 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:55 crc kubenswrapper[4768]: I1124 17:50:55.898328 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:55 crc kubenswrapper[4768]: I1124 17:50:55.898333 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:55 crc kubenswrapper[4768]: E1124 17:50:55.898405 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:55 crc kubenswrapper[4768]: E1124 17:50:55.898539 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:55 crc kubenswrapper[4768]: E1124 17:50:55.898626 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:56 crc kubenswrapper[4768]: I1124 17:50:56.898255 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
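The back-off 40s quoted for ovnkube-controller is the kubelet's CrashLoopBackOff delay, which (assuming the stock defaults of a 10s initial delay, doubled per restart and capped at 5m; the log itself does not state them) would place the container on its third restart attempt:

def crashloop_delays(restarts, base=10.0, cap=300.0):
    # Assumed kubelet defaults; yields the familiar 10s/20s/40s/80s/... ladder.
    delay, out = base, []
    for _ in range(restarts):
        out.append(min(delay, cap))
        delay *= 2
    return out

print(crashloop_delays(6))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]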
Nov 24 17:50:56 crc kubenswrapper[4768]: E1124 17:50:56.899011 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:57 crc kubenswrapper[4768]: I1124 17:50:57.898239 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:57 crc kubenswrapper[4768]: E1124 17:50:57.898667 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:57 crc kubenswrapper[4768]: I1124 17:50:57.898366 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:57 crc kubenswrapper[4768]: E1124 17:50:57.899396 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:50:57 crc kubenswrapper[4768]: I1124 17:50:57.898344 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:57 crc kubenswrapper[4768]: E1124 17:50:57.899598 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:58 crc kubenswrapper[4768]: I1124 17:50:58.897474 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:58 crc kubenswrapper[4768]: E1124 17:50:58.897653 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:50:59 crc kubenswrapper[4768]: I1124 17:50:59.226873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:50:59 crc kubenswrapper[4768]: E1124 17:50:59.227074 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 17:50:59 crc kubenswrapper[4768]: E1124 17:50:59.227223 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs podName:b50668f2-0a0b-40f4-9a38-3df082cf931e nodeName:}" failed. No retries permitted until 2025-11-24 17:52:03.227184436 +0000 UTC m=+162.087766253 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs") pod "network-metrics-daemon-hpd8h" (UID: "b50668f2-0a0b-40f4-9a38-3df082cf931e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 17:50:59 crc kubenswrapper[4768]: I1124 17:50:59.897884 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:50:59 crc kubenswrapper[4768]: I1124 17:50:59.897932 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:50:59 crc kubenswrapper[4768]: I1124 17:50:59.897961 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:50:59 crc kubenswrapper[4768]: E1124 17:50:59.898073 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:50:59 crc kubenswrapper[4768]: E1124 17:50:59.898220 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:50:59 crc kubenswrapper[4768]: E1124 17:50:59.898334 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:51:00 crc kubenswrapper[4768]: I1124 17:51:00.897292 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
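The nestedpendingoperations record above is self-consistent: the failure at 17:50:59.227 plus the quoted durationBeforeRetry of 1m4s gives exactly the "No retries permitted until" time of 17:52:03.227. The 64s step would also fit an exponential retry policy that doubles from a 500ms base (which would make this the eighth consecutive failure), though the base and factor are assumptions here, not facts from the log:

from datetime import datetime, timedelta

def duration_before_retry(failures, base=0.5):
    # Assumed policy: delay doubles per failure; 0.5s * 2**7 == 64s == 1m4s.
    return base * 2 ** (failures - 1)

failed_at = datetime(2025, 11, 24, 17, 50, 59, 227184)
print(failed_at + timedelta(seconds=duration_before_retry(8)))
# 2025-11-24 17:52:03.227184 -- matching the "No retries permitted until" time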
Nov 24 17:51:00 crc kubenswrapper[4768]: E1124 17:51:00.897463 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:51:01 crc kubenswrapper[4768]: I1124 17:51:01.898433 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 17:51:01 crc kubenswrapper[4768]: I1124 17:51:01.898474 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 17:51:01 crc kubenswrapper[4768]: I1124 17:51:01.898551 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 17:51:01 crc kubenswrapper[4768]: E1124 17:51:01.898679 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 17:51:01 crc kubenswrapper[4768]: E1124 17:51:01.899991 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 17:51:01 crc kubenswrapper[4768]: E1124 17:51:01.900049 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 17:51:02 crc kubenswrapper[4768]: I1124 17:51:02.897833 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h"
Nov 24 17:51:02 crc kubenswrapper[4768]: E1124 17:51:02.897991 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e"
Nov 24 17:51:03 crc kubenswrapper[4768]: I1124 17:51:03.897981 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:03 crc kubenswrapper[4768]: I1124 17:51:03.897982 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:03 crc kubenswrapper[4768]: I1124 17:51:03.898194 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:03 crc kubenswrapper[4768]: E1124 17:51:03.898126 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:03 crc kubenswrapper[4768]: E1124 17:51:03.898300 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:03 crc kubenswrapper[4768]: E1124 17:51:03.898333 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:04 crc kubenswrapper[4768]: I1124 17:51:04.897946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:04 crc kubenswrapper[4768]: E1124 17:51:04.898157 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:05 crc kubenswrapper[4768]: I1124 17:51:05.897640 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:05 crc kubenswrapper[4768]: I1124 17:51:05.897764 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:05 crc kubenswrapper[4768]: E1124 17:51:05.897885 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:05 crc kubenswrapper[4768]: I1124 17:51:05.897920 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:05 crc kubenswrapper[4768]: E1124 17:51:05.897982 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:05 crc kubenswrapper[4768]: E1124 17:51:05.898079 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:06 crc kubenswrapper[4768]: I1124 17:51:06.897775 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:06 crc kubenswrapper[4768]: E1124 17:51:06.898583 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:07 crc kubenswrapper[4768]: I1124 17:51:07.898303 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:07 crc kubenswrapper[4768]: I1124 17:51:07.898405 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:07 crc kubenswrapper[4768]: I1124 17:51:07.898324 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:07 crc kubenswrapper[4768]: E1124 17:51:07.898567 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:07 crc kubenswrapper[4768]: E1124 17:51:07.898808 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:07 crc kubenswrapper[4768]: E1124 17:51:07.898982 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:08 crc kubenswrapper[4768]: I1124 17:51:08.898236 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:08 crc kubenswrapper[4768]: E1124 17:51:08.898989 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:08 crc kubenswrapper[4768]: I1124 17:51:08.899073 4768 scope.go:117] "RemoveContainer" containerID="27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8" Nov 24 17:51:08 crc kubenswrapper[4768]: E1124 17:51:08.899927 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2gjr_openshift-ovn-kubernetes(938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" Nov 24 17:51:09 crc kubenswrapper[4768]: I1124 17:51:09.898323 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:09 crc kubenswrapper[4768]: E1124 17:51:09.898708 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:09 crc kubenswrapper[4768]: I1124 17:51:09.898802 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:09 crc kubenswrapper[4768]: I1124 17:51:09.898752 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:09 crc kubenswrapper[4768]: E1124 17:51:09.899104 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:09 crc kubenswrapper[4768]: E1124 17:51:09.899217 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:10 crc kubenswrapper[4768]: I1124 17:51:10.898111 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:10 crc kubenswrapper[4768]: E1124 17:51:10.898255 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:11 crc kubenswrapper[4768]: I1124 17:51:11.897949 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:11 crc kubenswrapper[4768]: I1124 17:51:11.898016 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:11 crc kubenswrapper[4768]: I1124 17:51:11.898754 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:11 crc kubenswrapper[4768]: E1124 17:51:11.899002 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:11 crc kubenswrapper[4768]: E1124 17:51:11.899174 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:11 crc kubenswrapper[4768]: E1124 17:51:11.899144 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:12 crc kubenswrapper[4768]: I1124 17:51:12.898150 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:12 crc kubenswrapper[4768]: E1124 17:51:12.898859 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:13 crc kubenswrapper[4768]: I1124 17:51:13.897589 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:13 crc kubenswrapper[4768]: E1124 17:51:13.897724 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:13 crc kubenswrapper[4768]: I1124 17:51:13.897740 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:13 crc kubenswrapper[4768]: E1124 17:51:13.897813 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:13 crc kubenswrapper[4768]: I1124 17:51:13.898094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:13 crc kubenswrapper[4768]: E1124 17:51:13.898167 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:14 crc kubenswrapper[4768]: I1124 17:51:14.898082 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:14 crc kubenswrapper[4768]: E1124 17:51:14.898205 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:15 crc kubenswrapper[4768]: I1124 17:51:15.900679 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:15 crc kubenswrapper[4768]: E1124 17:51:15.901153 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:15 crc kubenswrapper[4768]: I1124 17:51:15.900742 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:15 crc kubenswrapper[4768]: E1124 17:51:15.901272 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:15 crc kubenswrapper[4768]: I1124 17:51:15.900691 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:15 crc kubenswrapper[4768]: E1124 17:51:15.901433 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.501851 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/1.log" Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.502313 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/0.log" Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.502369 4768 generic.go:334] "Generic (PLEG): container finished" podID="895270a4-4f6a-4be4-9701-8a0f9cbf73d7" containerID="344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2" exitCode=1 Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.502411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerDied","Data":"344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2"} Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.502460 4768 scope.go:117] "RemoveContainer" containerID="e01746327b3250bd68d5a3c8c3b26be0f7f726dd26fc4851b49bb322ca1eb462" Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.502818 4768 scope.go:117] "RemoveContainer" containerID="344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2" Nov 24 17:51:16 crc kubenswrapper[4768]: E1124 17:51:16.502982 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-vssnl_openshift-multus(895270a4-4f6a-4be4-9701-8a0f9cbf73d7)\"" pod="openshift-multus/multus-vssnl" podUID="895270a4-4f6a-4be4-9701-8a0f9cbf73d7" Nov 24 17:51:16 crc kubenswrapper[4768]: I1124 17:51:16.897592 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:16 crc kubenswrapper[4768]: E1124 17:51:16.897769 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:17 crc kubenswrapper[4768]: I1124 17:51:17.507640 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/1.log" Nov 24 17:51:17 crc kubenswrapper[4768]: I1124 17:51:17.897746 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:17 crc kubenswrapper[4768]: I1124 17:51:17.897802 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:17 crc kubenswrapper[4768]: I1124 17:51:17.897821 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:17 crc kubenswrapper[4768]: E1124 17:51:17.897870 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:17 crc kubenswrapper[4768]: E1124 17:51:17.897995 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:17 crc kubenswrapper[4768]: E1124 17:51:17.898132 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:18 crc kubenswrapper[4768]: I1124 17:51:18.897643 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:18 crc kubenswrapper[4768]: E1124 17:51:18.897792 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:19 crc kubenswrapper[4768]: I1124 17:51:19.898120 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:19 crc kubenswrapper[4768]: E1124 17:51:19.898288 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:19 crc kubenswrapper[4768]: I1124 17:51:19.898121 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:19 crc kubenswrapper[4768]: E1124 17:51:19.898532 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:19 crc kubenswrapper[4768]: I1124 17:51:19.898831 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:19 crc kubenswrapper[4768]: E1124 17:51:19.898929 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:20 crc kubenswrapper[4768]: I1124 17:51:20.897404 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:20 crc kubenswrapper[4768]: E1124 17:51:20.897630 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:21 crc kubenswrapper[4768]: E1124 17:51:21.876030 4768 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 17:51:21 crc kubenswrapper[4768]: I1124 17:51:21.897774 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:21 crc kubenswrapper[4768]: E1124 17:51:21.900324 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:21 crc kubenswrapper[4768]: I1124 17:51:21.900545 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:21 crc kubenswrapper[4768]: I1124 17:51:21.900623 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:21 crc kubenswrapper[4768]: E1124 17:51:21.900696 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:21 crc kubenswrapper[4768]: E1124 17:51:21.900760 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:21 crc kubenswrapper[4768]: E1124 17:51:21.998678 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 17:51:22 crc kubenswrapper[4768]: I1124 17:51:22.898330 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:22 crc kubenswrapper[4768]: E1124 17:51:22.898885 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:22 crc kubenswrapper[4768]: I1124 17:51:22.899169 4768 scope.go:117] "RemoveContainer" containerID="27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.538191 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/3.log" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.540968 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerStarted","Data":"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"} Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.542439 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.584900 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podStartSLOduration=102.584881748 podStartE2EDuration="1m42.584881748s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:23.584093528 +0000 UTC m=+122.444675305" watchObservedRunningTime="2025-11-24 17:51:23.584881748 +0000 UTC m=+122.445463535" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.901060 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.901160 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.901076 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:23 crc kubenswrapper[4768]: E1124 17:51:23.901202 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:23 crc kubenswrapper[4768]: E1124 17:51:23.901284 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:23 crc kubenswrapper[4768]: E1124 17:51:23.901364 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.965363 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hpd8h"] Nov 24 17:51:23 crc kubenswrapper[4768]: I1124 17:51:23.965515 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:23 crc kubenswrapper[4768]: E1124 17:51:23.965620 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:25 crc kubenswrapper[4768]: I1124 17:51:25.898009 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:25 crc kubenswrapper[4768]: I1124 17:51:25.898062 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:25 crc kubenswrapper[4768]: I1124 17:51:25.898108 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:25 crc kubenswrapper[4768]: I1124 17:51:25.898018 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:25 crc kubenswrapper[4768]: E1124 17:51:25.898152 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:25 crc kubenswrapper[4768]: E1124 17:51:25.898269 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:25 crc kubenswrapper[4768]: E1124 17:51:25.898365 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:25 crc kubenswrapper[4768]: E1124 17:51:25.898420 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:27 crc kubenswrapper[4768]: E1124 17:51:27.000185 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 17:51:27 crc kubenswrapper[4768]: I1124 17:51:27.898100 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:27 crc kubenswrapper[4768]: I1124 17:51:27.898186 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:27 crc kubenswrapper[4768]: I1124 17:51:27.898123 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:27 crc kubenswrapper[4768]: I1124 17:51:27.898123 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:27 crc kubenswrapper[4768]: E1124 17:51:27.898292 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:27 crc kubenswrapper[4768]: E1124 17:51:27.898610 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:27 crc kubenswrapper[4768]: E1124 17:51:27.898706 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:27 crc kubenswrapper[4768]: E1124 17:51:27.898762 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:29 crc kubenswrapper[4768]: I1124 17:51:29.897650 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:29 crc kubenswrapper[4768]: I1124 17:51:29.897722 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:29 crc kubenswrapper[4768]: I1124 17:51:29.897671 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:29 crc kubenswrapper[4768]: E1124 17:51:29.897837 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:29 crc kubenswrapper[4768]: E1124 17:51:29.898051 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:29 crc kubenswrapper[4768]: E1124 17:51:29.898256 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:29 crc kubenswrapper[4768]: I1124 17:51:29.898297 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:29 crc kubenswrapper[4768]: E1124 17:51:29.898476 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:29 crc kubenswrapper[4768]: I1124 17:51:29.899240 4768 scope.go:117] "RemoveContainer" containerID="344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2" Nov 24 17:51:30 crc kubenswrapper[4768]: I1124 17:51:30.566206 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/1.log" Nov 24 17:51:30 crc kubenswrapper[4768]: I1124 17:51:30.566682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerStarted","Data":"7cd36c7ee341731a5eab683195734326510c57c98fea98906e0139f89383ce09"} Nov 24 17:51:31 crc kubenswrapper[4768]: I1124 17:51:31.897709 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:31 crc kubenswrapper[4768]: I1124 17:51:31.897766 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:31 crc kubenswrapper[4768]: I1124 17:51:31.897689 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:31 crc kubenswrapper[4768]: I1124 17:51:31.897835 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:31 crc kubenswrapper[4768]: E1124 17:51:31.899112 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 17:51:31 crc kubenswrapper[4768]: E1124 17:51:31.899300 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 17:51:31 crc kubenswrapper[4768]: E1124 17:51:31.899409 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hpd8h" podUID="b50668f2-0a0b-40f4-9a38-3df082cf931e" Nov 24 17:51:31 crc kubenswrapper[4768]: E1124 17:51:31.899466 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.898400 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.898443 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.898390 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.898400 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.901777 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.901838 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.901854 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.901950 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.901967 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 17:51:33 crc kubenswrapper[4768]: I1124 17:51:33.902324 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.696426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.742377 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwfrc"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.748171 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.748318 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.748908 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xxlhx"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.750273 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.752030 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.752417 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5sdcl"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.765257 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.766076 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.771000 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.771806 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.772220 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.772425 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.772423 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.772601 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.773587 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.773813 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.773944 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.774316 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.774687 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.775198 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.775833 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.776325 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.776557 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.776571 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lbcxh"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.776708 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.776787 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.776921 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.777380 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.777819 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.777985 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.779197 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.779281 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.779836 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.779904 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780047 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780249 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780358 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780518 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780670 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780707 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780604 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780791 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780924 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.780934 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.781291 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.781950 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.784141 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.784576 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.784871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.785055 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fgt8t"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.785576 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 17:51:39 crc 
kubenswrapper[4768]: I1124 17:51:39.785724 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.785916 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.785995 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.786187 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.786942 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.786973 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.786975 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.788099 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.788755 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.788918 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.789048 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.789171 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.796505 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v8v5f"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.797323 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-745nn"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.797405 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.800771 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.805645 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.806061 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gx45l"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.806116 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.806594 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.807228 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.817689 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.817927 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.818091 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.818222 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.818337 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.834954 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835209 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835328 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835422 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835532 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835629 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835718 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835808 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.835906 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.836166 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.836386 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.836739 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.837173 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.837241 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.837458 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.837603 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.850551 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.850726 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.850896 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tj982"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.851413 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.852322 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.853173 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.853862 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.854247 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.854621 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.854867 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855051 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855125 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855172 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855302 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855338 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855426 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855576 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855590 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855609 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855659 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855784 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855896 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.855926 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.857202 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nxw22"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.857718 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.858974 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.859335 4768 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.859391 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.859576 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.873054 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.873379 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.873915 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874254 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874283 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-config\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874306 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-config\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874322 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4312574-3ae8-49f4-a799-e20198b71149-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874348 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-etcd-client\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874390 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28stp\" (UniqueName: \"kubernetes.io/projected/6f01642d-b03b-4448-9152-9285d7ca0a6c-kube-api-access-28stp\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: 
\"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874408 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/399d5dbd-8565-4557-b593-f7c1ca2abcf5-audit-dir\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874451 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkgts\" (UniqueName: \"kubernetes.io/projected/f4312574-3ae8-49f4-a799-e20198b71149-kube-api-access-mkgts\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874469 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-serving-cert\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/809a0417-e4ae-4f20-b068-90d7ce5f8617-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fjj99\" (UID: \"809a0417-e4ae-4f20-b068-90d7ce5f8617\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874539 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-client-ca\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4312574-3ae8-49f4-a799-e20198b71149-images\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874574 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e322e474-b6fd-43ec-a7f4-8680a5b02172-serving-cert\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs2n\" (UniqueName: \"kubernetes.io/projected/097861b9-f639-4e44-a54e-ae798f106ef0-kube-api-access-8fs2n\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-service-ca-bundle\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097861b9-f639-4e44-a54e-ae798f106ef0-serving-cert\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874635 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874736 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaea92fe-c8a2-45e7-892e-e7897060eae4-serving-cert\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874946 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhv2t\" (UniqueName: \"kubernetes.io/projected/745e1125-670f-4e6e-acf0-e1206cf06a8e-kube-api-access-vhv2t\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874978 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f01642d-b03b-4448-9152-9285d7ca0a6c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.874995 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-client-ca\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875035 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-image-import-ca\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875053 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-config\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875069 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-serving-cert\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875090 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875106 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncjx\" (UniqueName: \"kubernetes.io/projected/df08e410-ea02-4bf7-8330-d0530b2c08b5-kube-api-access-sncjx\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875128 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875129 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/745e1125-670f-4e6e-acf0-e1206cf06a8e-node-pullsecrets\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drshw\" (UniqueName: 
\"kubernetes.io/projected/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-kube-api-access-drshw\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg6d7\" (UniqueName: \"kubernetes.io/projected/eaea92fe-c8a2-45e7-892e-e7897060eae4-kube-api-access-gg6d7\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875539 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e322e474-b6fd-43ec-a7f4-8680a5b02172-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df08e410-ea02-4bf7-8330-d0530b2c08b5-serving-cert\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875645 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w69k\" (UniqueName: \"kubernetes.io/projected/73cd8533-3450-46e3-89b9-6dd092750ef9-kube-api-access-7w69k\") pod \"downloads-7954f5f757-fgt8t\" (UID: \"73cd8533-3450-46e3-89b9-6dd092750ef9\") " pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875695 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-audit\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-serving-cert\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-audit-policies\") pod 
\"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875776 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-encryption-config\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875796 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-config\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875838 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-config\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-trusted-ca\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ckp\" (UniqueName: \"kubernetes.io/projected/399d5dbd-8565-4557-b593-f7c1ca2abcf5-kube-api-access-t9ckp\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875929 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-etcd-serving-ca\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.875951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9t6\" (UniqueName: \"kubernetes.io/projected/809a0417-e4ae-4f20-b068-90d7ce5f8617-kube-api-access-7t9t6\") pod \"cluster-samples-operator-665b6dd947-fjj99\" (UID: \"809a0417-e4ae-4f20-b068-90d7ce5f8617\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.876029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc096e2-e012-44f8-bfad-3d48cc621cc9-auth-proxy-config\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.876059 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc096e2-e012-44f8-bfad-3d48cc621cc9-config\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.876311 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-etcd-client\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877280 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7zf\" (UniqueName: \"kubernetes.io/projected/e322e474-b6fd-43ec-a7f4-8680a5b02172-kube-api-access-4w7zf\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877307 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ffc096e2-e012-44f8-bfad-3d48cc621cc9-machine-approver-tls\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-encryption-config\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877345 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/745e1125-670f-4e6e-acf0-e1206cf06a8e-audit-dir\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4312574-3ae8-49f4-a799-e20198b71149-config\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prcm4\" (UniqueName: \"kubernetes.io/projected/ffc096e2-e012-44f8-bfad-3d48cc621cc9-kube-api-access-prcm4\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc 
kubenswrapper[4768]: I1124 17:51:39.877397 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f01642d-b03b-4448-9152-9285d7ca0a6c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.877611 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.878311 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.879070 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.879174 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.879232 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.879253 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.879261 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.885567 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.886795 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.888473 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.890173 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.890362 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.891015 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.892184 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.892615 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.895645 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.898009 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.898709 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.898922 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.899388 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.899591 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.907114 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.910180 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-96ff4"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.910931 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.911262 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.911286 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zm9x"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.911910 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.912154 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.913060 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8hvbs"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.914973 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.918866 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.919258 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.919943 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.920539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.920773 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.931848 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.933854 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.935144 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6stph"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.935779 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.936368 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.937290 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9brwg"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.938197 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.939306 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.939560 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.939739 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.940906 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.941621 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.943762 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zzvkd"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.945011 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.952679 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.955537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.955748 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.957814 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.958698 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.962638 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.963366 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.963428 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.964197 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.965183 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwfrc"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.969127 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.970361 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.970501 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.971885 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xxlhx"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.972996 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v8v5f"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.974104 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5sdcl"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.975160 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.975990 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tj982"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.976869 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fgt8t"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-config\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977914 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-serving-cert\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-oauth-serving-cert\") pod \"console-f9d7485db-tj982\" 
(UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977946 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sncjx\" (UniqueName: \"kubernetes.io/projected/df08e410-ea02-4bf7-8330-d0530b2c08b5-kube-api-access-sncjx\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977977 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.977995 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-trusted-ca-bundle\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978013 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/745e1125-670f-4e6e-acf0-e1206cf06a8e-node-pullsecrets\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drshw\" (UniqueName: \"kubernetes.io/projected/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-kube-api-access-drshw\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978042 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg6d7\" (UniqueName: \"kubernetes.io/projected/eaea92fe-c8a2-45e7-892e-e7897060eae4-kube-api-access-gg6d7\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978058 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df08e410-ea02-4bf7-8330-d0530b2c08b5-serving-cert\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 
17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978073 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978107 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e322e474-b6fd-43ec-a7f4-8680a5b02172-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978122 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978136 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w69k\" (UniqueName: \"kubernetes.io/projected/73cd8533-3450-46e3-89b9-6dd092750ef9-kube-api-access-7w69k\") pod \"downloads-7954f5f757-fgt8t\" (UID: \"73cd8533-3450-46e3-89b9-6dd092750ef9\") " pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978154 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9439073f-3757-4e0b-959d-fd0c1294ad75-proxy-tls\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-service-ca\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978198 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-config\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978213 
4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-serving-cert\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978227 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-audit\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-audit-policies\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978271 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-encryption-config\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc955\" (UniqueName: \"kubernetes.io/projected/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-kube-api-access-wc955\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978305 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-config\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978321 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978340 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f98cf38a-e904-4b11-bd9a-bb558bc603ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-config\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978395 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-config\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978408 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978428 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d08bd8b5-0113-45a9-b115-205e452b1481-metrics-tls\") pod \"dns-operator-744455d44c-gx45l\" (UID: \"d08bd8b5-0113-45a9-b115-205e452b1481\") " pod="openshift-dns-operator/dns-operator-744455d44c-gx45l"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9ckp\" (UniqueName: \"kubernetes.io/projected/399d5dbd-8565-4557-b593-f7c1ca2abcf5-kube-api-access-t9ckp\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-trusted-ca\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978474 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-oauth-config\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/812c8c26-80fa-4bc3-892c-d101746601c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-etcd-serving-ca\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc096e2-e012-44f8-bfad-3d48cc621cc9-config\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t9t6\" (UniqueName: \"kubernetes.io/projected/809a0417-e4ae-4f20-b068-90d7ce5f8617-kube-api-access-7t9t6\") pod \"cluster-samples-operator-665b6dd947-fjj99\" (UID: \"809a0417-e4ae-4f20-b068-90d7ce5f8617\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978595 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc096e2-e012-44f8-bfad-3d48cc621cc9-auth-proxy-config\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978611 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-client\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s674\" (UniqueName: \"kubernetes.io/projected/920a0317-09dd-43e5-b5a9-11feb6d3b37d-kube-api-access-6s674\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-etcd-client\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978656 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-policies\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7zf\" (UniqueName: \"kubernetes.io/projected/e322e474-b6fd-43ec-a7f4-8680a5b02172-kube-api-access-4w7zf\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-config\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngs9m\" (UniqueName: \"kubernetes.io/projected/9439073f-3757-4e0b-959d-fd0c1294ad75-kube-api-access-ngs9m\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978765 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh86r\" (UniqueName: \"kubernetes.io/projected/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-kube-api-access-rh86r\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4312574-3ae8-49f4-a799-e20198b71149-config\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978827 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ffc096e2-e012-44f8-bfad-3d48cc621cc9-machine-approver-tls\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812c8c26-80fa-4bc3-892c-d101746601c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-encryption-config\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978902 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/745e1125-670f-4e6e-acf0-e1206cf06a8e-audit-dir\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978923 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f01642d-b03b-4448-9152-9285d7ca0a6c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978949 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prcm4\" (UniqueName: \"kubernetes.io/projected/ffc096e2-e012-44f8-bfad-3d48cc621cc9-kube-api-access-prcm4\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978967 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.978984 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-config\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f"
Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979001 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
\"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979003 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e322e474-b6fd-43ec-a7f4-8680a5b02172-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979017 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-ca\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979051 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-etcd-client\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-config\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4312574-3ae8-49f4-a799-e20198b71149-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979107 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28stp\" (UniqueName: \"kubernetes.io/projected/6f01642d-b03b-4448-9152-9285d7ca0a6c-kube-api-access-28stp\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: 
\"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f98cf38a-e904-4b11-bd9a-bb558bc603ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979358 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f98cf38a-e904-4b11-bd9a-bb558bc603ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9439073f-3757-4e0b-959d-fd0c1294ad75-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979422 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-serving-cert\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979439 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/399d5dbd-8565-4557-b593-f7c1ca2abcf5-audit-dir\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkgts\" (UniqueName: \"kubernetes.io/projected/f4312574-3ae8-49f4-a799-e20198b71149-kube-api-access-mkgts\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979481 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhnfb\" (UniqueName: \"kubernetes.io/projected/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-kube-api-access-zhnfb\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc 
kubenswrapper[4768]: I1124 17:51:39.979509 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv25z\" (UniqueName: \"kubernetes.io/projected/d08bd8b5-0113-45a9-b115-205e452b1481-kube-api-access-cv25z\") pod \"dns-operator-744455d44c-gx45l\" (UID: \"d08bd8b5-0113-45a9-b115-205e452b1481\") " pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-serving-cert\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/809a0417-e4ae-4f20-b068-90d7ce5f8617-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fjj99\" (UID: \"809a0417-e4ae-4f20-b068-90d7ce5f8617\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-client-ca\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979606 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4312574-3ae8-49f4-a799-e20198b71149-images\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979640 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-service-ca\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jvdnk\" (UniqueName: \"kubernetes.io/projected/f98cf38a-e904-4b11-bd9a-bb558bc603ae-kube-api-access-jvdnk\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979686 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e322e474-b6fd-43ec-a7f4-8680a5b02172-serving-cert\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fs2n\" (UniqueName: \"kubernetes.io/projected/097861b9-f639-4e44-a54e-ae798f106ef0-kube-api-access-8fs2n\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979720 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5nw\" (UniqueName: \"kubernetes.io/projected/3f7d3e72-29f7-417e-9b42-b13c93e56f46-kube-api-access-mg5nw\") pod \"migrator-59844c95c7-j6bxp\" (UID: \"3f7d3e72-29f7-417e-9b42-b13c93e56f46\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-service-ca-bundle\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979756 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-config\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaea92fe-c8a2-45e7-892e-e7897060eae4-serving-cert\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/097861b9-f639-4e44-a54e-ae798f106ef0-serving-cert\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979865 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979886 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812c8c26-80fa-4bc3-892c-d101746601c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-dir\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979920 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhv2t\" (UniqueName: \"kubernetes.io/projected/745e1125-670f-4e6e-acf0-e1206cf06a8e-kube-api-access-vhv2t\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979935 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f01642d-b03b-4448-9152-9285d7ca0a6c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-client-ca\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-serving-cert\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.979997 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-image-import-ca\") pod 
\"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.980026 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.980063 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-745nn"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.980757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-image-import-ca\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.980867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/745e1125-670f-4e6e-acf0-e1206cf06a8e-node-pullsecrets\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.981154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-audit\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.981526 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4312574-3ae8-49f4-a799-e20198b71149-config\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.981664 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-audit-policies\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.982556 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-config\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.982601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/745e1125-670f-4e6e-acf0-e1206cf06a8e-audit-dir\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.982651 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399d5dbd-8565-4557-b593-f7c1ca2abcf5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: 
\"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.982711 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.982764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-etcd-serving-ca\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.983076 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f01642d-b03b-4448-9152-9285d7ca0a6c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.983278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.983516 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc096e2-e012-44f8-bfad-3d48cc621cc9-config\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.983790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/745e1125-670f-4e6e-acf0-e1206cf06a8e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.983838 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hwbdt"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.983897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/399d5dbd-8565-4557-b593-f7c1ca2abcf5-audit-dir\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.984242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-config\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc 
kubenswrapper[4768]: I1124 17:51:39.984356 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4312574-3ae8-49f4-a799-e20198b71149-images\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.984707 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-config\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.984753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-client-ca\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.984847 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.984901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaea92fe-c8a2-45e7-892e-e7897060eae4-service-ca-bundle\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.985009 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.985183 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-etcd-client\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.985321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-encryption-config\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.985575 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-client-ca\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.985799 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc096e2-e012-44f8-bfad-3d48cc621cc9-auth-proxy-config\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.985923 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ffc096e2-e012-44f8-bfad-3d48cc621cc9-machine-approver-tls\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.986093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/745e1125-670f-4e6e-acf0-e1206cf06a8e-serving-cert\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.986138 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.986906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-trusted-ca\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.987311 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nxw22"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.987349 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaea92fe-c8a2-45e7-892e-e7897060eae4-serving-cert\") pod 
\"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.987539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-config\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.987834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4312574-3ae8-49f4-a799-e20198b71149-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.988051 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-serving-cert\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.988080 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/809a0417-e4ae-4f20-b068-90d7ce5f8617-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fjj99\" (UID: \"809a0417-e4ae-4f20-b068-90d7ce5f8617\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.988244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df08e410-ea02-4bf7-8330-d0530b2c08b5-serving-cert\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.988394 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e322e474-b6fd-43ec-a7f4-8680a5b02172-serving-cert\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.988428 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.988843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-etcd-client\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.989709 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/399d5dbd-8565-4557-b593-f7c1ca2abcf5-encryption-config\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: 
\"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.989785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097861b9-f639-4e44-a54e-ae798f106ef0-serving-cert\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.990121 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.990222 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.990888 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f01642d-b03b-4448-9152-9285d7ca0a6c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.991384 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gx45l"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.992702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-serving-cert\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.992756 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.994586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zzvkd"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.997614 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6stph"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.998567 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lbcxh"] Nov 24 17:51:39 crc kubenswrapper[4768]: I1124 17:51:39.999315 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.001860 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9brwg"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.003594 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.005188 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.007406 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.009908 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.010136 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.011010 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zm9x"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.011771 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.011977 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-96ff4"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.013085 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.014193 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fph7m"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.015357 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qd5vx"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.015906 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.016700 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.025916 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.029612 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.030972 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.031596 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.033924 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fph7m"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.036708 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qd5vx"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.037720 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qk7vd"] Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.038456 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.038754 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qk7vd"]
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.050197 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.070395 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080422 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-stats-auth\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080474 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-metrics-certs\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080508 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080532 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-config\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080661 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkfxk\" (UniqueName: \"kubernetes.io/projected/5b91c837-cd56-4b9a-b69e-7bc008877eb9-kube-api-access-pkfxk\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080686 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-srv-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080702 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwz7\" (UniqueName: \"kubernetes.io/projected/b915353f-fcb8-4d2c-841f-a2091f2c7d96-kube-api-access-dpwz7\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080739 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f98cf38a-e904-4b11-bd9a-bb558bc603ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080774 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d08bd8b5-0113-45a9-b115-205e452b1481-metrics-tls\") pod \"dns-operator-744455d44c-gx45l\" (UID: \"d08bd8b5-0113-45a9-b115-205e452b1481\") " pod="openshift-dns-operator/dns-operator-744455d44c-gx45l"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b91c837-cd56-4b9a-b69e-7bc008877eb9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080824 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/812c8c26-80fa-4bc3-892c-d101746601c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-config\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-client\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080879 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s674\" (UniqueName: \"kubernetes.io/projected/920a0317-09dd-43e5-b5a9-11feb6d3b37d-kube-api-access-6s674\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080898 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngs9m\" (UniqueName: \"kubernetes.io/projected/9439073f-3757-4e0b-959d-fd0c1294ad75-kube-api-access-ngs9m\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080914 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b6167fc-ef32-4514-aa36-75ac504c9393-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-ca\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.080983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ldkx\" (UniqueName: \"kubernetes.io/projected/c39e586f-224c-4428-9114-1accf92dc1d4-kube-api-access-4ldkx\") pod \"multus-admission-controller-857f4d67dd-96ff4\" (UID: \"c39e586f-224c-4428-9114-1accf92dc1d4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081005 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-serving-cert\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081022 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081043 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c39e586f-224c-4428-9114-1accf92dc1d4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-96ff4\" (UID: \"c39e586f-224c-4428-9114-1accf92dc1d4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9439073f-3757-4e0b-959d-fd0c1294ad75-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081094 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-serving-cert\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081118 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/622d16ca-1d8c-49e7-8ad7-c7b33b9003f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrct4\" (UID: \"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081141 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnw8b\" (UniqueName: \"kubernetes.io/projected/9b8d6985-79fe-4be9-a7e3-5c762214d678-kube-api-access-xnw8b\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081166 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv25z\" (UniqueName: \"kubernetes.io/projected/d08bd8b5-0113-45a9-b115-205e452b1481-kube-api-access-cv25z\") pod \"dns-operator-744455d44c-gx45l\" (UID: \"d08bd8b5-0113-45a9-b115-205e452b1481\") " pod="openshift-dns-operator/dns-operator-744455d44c-gx45l"
Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-745nn\" (UID:
\"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdnk\" (UniqueName: \"kubernetes.io/projected/f98cf38a-e904-4b11-bd9a-bb558bc603ae-kube-api-access-jvdnk\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081287 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-config\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081307 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b91c837-cd56-4b9a-b69e-7bc008877eb9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081313 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-config\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812c8c26-80fa-4bc3-892c-d101746601c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.081453 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b915353f-fcb8-4d2c-841f-a2091f2c7d96-service-ca-bundle\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " 
pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.082084 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-serving-cert\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.082126 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.082168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-oauth-serving-cert\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.082233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-trusted-ca-bundle\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.082942 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083266 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9439073f-3757-4e0b-959d-fd0c1294ad75-proxy-tls\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-service-ca\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083529 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083677 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc955\" (UniqueName: \"kubernetes.io/projected/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-kube-api-access-wc955\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmf8r\" (UniqueName: \"kubernetes.io/projected/4b6167fc-ef32-4514-aa36-75ac504c9393-kube-api-access-kmf8r\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083736 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083758 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-config\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083781 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.083945 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-trusted-ca-bundle\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.084307 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-service-ca\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.084536 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.084687 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9439073f-3757-4e0b-959d-fd0c1294ad75-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-oauth-config\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-oauth-serving-cert\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085817 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b91c837-cd56-4b9a-b69e-7bc008877eb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmg9z\" (UniqueName: \"kubernetes.io/projected/622d16ca-1d8c-49e7-8ad7-c7b33b9003f2-kube-api-access-lmg9z\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrct4\" (UID: \"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.085968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-policies\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-config\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6167fc-ef32-4514-aa36-75ac504c9393-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh86r\" (UniqueName: \"kubernetes.io/projected/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-kube-api-access-rh86r\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086449 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086546 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812c8c26-80fa-4bc3-892c-d101746601c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086705 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-policies\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086877 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.086940 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmdc\" (UniqueName: \"kubernetes.io/projected/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-kube-api-access-7qmdc\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087117 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f98cf38a-e904-4b11-bd9a-bb558bc603ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087151 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f98cf38a-e904-4b11-bd9a-bb558bc603ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhnfb\" (UniqueName: \"kubernetes.io/projected/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-kube-api-access-zhnfb\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087218 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-service-ca\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087243 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-default-certificate\") pod \"router-default-5444994796-8hvbs\" (UID: 
\"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087274 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg5nw\" (UniqueName: \"kubernetes.io/projected/3f7d3e72-29f7-417e-9b42-b13c93e56f46-kube-api-access-mg5nw\") pod \"migrator-59844c95c7-j6bxp\" (UID: \"3f7d3e72-29f7-417e-9b42-b13c93e56f46\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087310 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087336 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087359 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-dir\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087401 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mk2n\" (UniqueName: \"kubernetes.io/projected/798036f8-c88a-4293-85f7-59946faf2a71-kube-api-access-2mk2n\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.087429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.088121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-serving-cert\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.088174 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-dir\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.088215 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d08bd8b5-0113-45a9-b115-205e452b1481-metrics-tls\") pod \"dns-operator-744455d44c-gx45l\" (UID: \"d08bd8b5-0113-45a9-b115-205e452b1481\") " pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.088254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.088824 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-serving-cert\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.089062 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-service-ca\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.089234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.089531 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-client\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.090332 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.090870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-oauth-config\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.091115 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.091364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.091690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.093558 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.094868 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-config\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.110306 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.114544 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-etcd-ca\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.150935 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.170919 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188333 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qmdc\" (UniqueName: \"kubernetes.io/projected/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-kube-api-access-7qmdc\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188427 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-default-certificate\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188519 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mk2n\" (UniqueName: 
\"kubernetes.io/projected/798036f8-c88a-4293-85f7-59946faf2a71-kube-api-access-2mk2n\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-stats-auth\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188593 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-metrics-certs\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkfxk\" (UniqueName: \"kubernetes.io/projected/5b91c837-cd56-4b9a-b69e-7bc008877eb9-kube-api-access-pkfxk\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188674 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-srv-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwz7\" (UniqueName: \"kubernetes.io/projected/b915353f-fcb8-4d2c-841f-a2091f2c7d96-kube-api-access-dpwz7\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b91c837-cd56-4b9a-b69e-7bc008877eb9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188835 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-config\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188892 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b6167fc-ef32-4514-aa36-75ac504c9393-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.188964 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ldkx\" (UniqueName: \"kubernetes.io/projected/c39e586f-224c-4428-9114-1accf92dc1d4-kube-api-access-4ldkx\") pod \"multus-admission-controller-857f4d67dd-96ff4\" (UID: \"c39e586f-224c-4428-9114-1accf92dc1d4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189004 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-serving-cert\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189034 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c39e586f-224c-4428-9114-1accf92dc1d4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-96ff4\" (UID: \"c39e586f-224c-4428-9114-1accf92dc1d4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/622d16ca-1d8c-49e7-8ad7-c7b33b9003f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrct4\" (UID: \"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189108 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnw8b\" (UniqueName: \"kubernetes.io/projected/9b8d6985-79fe-4be9-a7e3-5c762214d678-kube-api-access-xnw8b\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189194 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b91c837-cd56-4b9a-b69e-7bc008877eb9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189355 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b915353f-fcb8-4d2c-841f-a2091f2c7d96-service-ca-bundle\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " 
pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189623 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189675 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmf8r\" (UniqueName: \"kubernetes.io/projected/4b6167fc-ef32-4514-aa36-75ac504c9393-kube-api-access-kmf8r\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189701 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b91c837-cd56-4b9a-b69e-7bc008877eb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmg9z\" (UniqueName: \"kubernetes.io/projected/622d16ca-1d8c-49e7-8ad7-c7b33b9003f2-kube-api-access-lmg9z\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrct4\" (UID: \"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.189811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6167fc-ef32-4514-aa36-75ac504c9393-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.190665 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.210639 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.216991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9439073f-3757-4e0b-959d-fd0c1294ad75-proxy-tls\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.230693 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.250799 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.270525 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.291124 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.301818 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f98cf38a-e904-4b11-bd9a-bb558bc603ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.316784 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.318704 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f98cf38a-e904-4b11-bd9a-bb558bc603ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.331901 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.350448 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.372080 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.380187 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812c8c26-80fa-4bc3-892c-d101746601c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.391043 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.392578 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812c8c26-80fa-4bc3-892c-d101746601c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.410971 4768 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.430692 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.443409 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c39e586f-224c-4428-9114-1accf92dc1d4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-96ff4\" (UID: \"c39e586f-224c-4428-9114-1accf92dc1d4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.450500 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.486958 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.490794 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b91c837-cd56-4b9a-b69e-7bc008877eb9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.491311 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.511549 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.524404 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b91c837-cd56-4b9a-b69e-7bc008877eb9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.531303 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.551948 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.565950 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.571230 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.597440 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.600798 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.611739 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.630556 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.642663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-stats-auth\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.651742 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.670978 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.682957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-default-certificate\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.690250 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.700224 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b915353f-fcb8-4d2c-841f-a2091f2c7d96-service-ca-bundle\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.711309 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.723242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b915353f-fcb8-4d2c-841f-a2091f2c7d96-metrics-certs\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.732122 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.752013 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.771449 4768 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.790871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.803007 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/622d16ca-1d8c-49e7-8ad7-c7b33b9003f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrct4\" (UID: \"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.812196 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.831769 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.841921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b6167fc-ef32-4514-aa36-75ac504c9393-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.850841 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.861567 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6167fc-ef32-4514-aa36-75ac504c9393-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.871359 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.890960 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.911050 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.931014 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.949579 4768 request.go:700] Waited for 1.013280347s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0 Nov 24 17:51:40 
crc kubenswrapper[4768]: I1124 17:51:40.951331 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.970831 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 17:51:40 crc kubenswrapper[4768]: I1124 17:51:40.990193 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.003527 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.011523 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.031328 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.050847 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.070540 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.091957 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.111532 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.131589 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.150381 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.171562 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 17:51:41 crc kubenswrapper[4768]: E1124 17:51:41.189617 4768 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 17:51:41 crc kubenswrapper[4768]: E1124 17:51:41.189706 4768 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 24 17:51:41 crc kubenswrapper[4768]: E1124 17:51:41.189768 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-serving-cert podName:41284f0b-a93d-49a3-bfbc-1f0aeae13cdc nodeName:}" failed. 
No retries permitted until 2025-11-24 17:51:41.689719318 +0000 UTC m=+140.550301145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-serving-cert") pod "service-ca-operator-777779d784-9brwg" (UID: "41284f0b-a93d-49a3-bfbc-1f0aeae13cdc") : failed to sync secret cache: timed out waiting for the condition Nov 24 17:51:41 crc kubenswrapper[4768]: E1124 17:51:41.189814 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-config podName:41284f0b-a93d-49a3-bfbc-1f0aeae13cdc nodeName:}" failed. No retries permitted until 2025-11-24 17:51:41.68979485 +0000 UTC m=+140.550376677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-config") pod "service-ca-operator-777779d784-9brwg" (UID: "41284f0b-a93d-49a3-bfbc-1f0aeae13cdc") : failed to sync configmap cache: timed out waiting for the condition Nov 24 17:51:41 crc kubenswrapper[4768]: E1124 17:51:41.189639 4768 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 17:51:41 crc kubenswrapper[4768]: E1124 17:51:41.190314 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-srv-cert podName:798036f8-c88a-4293-85f7-59946faf2a71 nodeName:}" failed. No retries permitted until 2025-11-24 17:51:41.690282863 +0000 UTC m=+140.550864670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-srv-cert") pod "olm-operator-6b444d44fb-nd7rd" (UID: "798036f8-c88a-4293-85f7-59946faf2a71") : failed to sync secret cache: timed out waiting for the condition Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.191182 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.210107 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.230548 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.251013 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.271182 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.291804 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.311619 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.331299 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.351767 4768 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.372620 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.408306 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.410456 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.432605 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.451585 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.471822 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.491523 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.531147 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w69k\" (UniqueName: \"kubernetes.io/projected/73cd8533-3450-46e3-89b9-6dd092750ef9-kube-api-access-7w69k\") pod \"downloads-7954f5f757-fgt8t\" (UID: \"73cd8533-3450-46e3-89b9-6dd092750ef9\") " pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.552400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sncjx\" (UniqueName: \"kubernetes.io/projected/df08e410-ea02-4bf7-8330-d0530b2c08b5-kube-api-access-sncjx\") pod \"controller-manager-879f6c89f-mwfrc\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.569179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drshw\" (UniqueName: \"kubernetes.io/projected/e790bb9a-6948-438a-8d6e-b8a9db1e2aa9-kube-api-access-drshw\") pod \"console-operator-58897d9998-lbcxh\" (UID: \"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9\") " pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.587872 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.595087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg6d7\" (UniqueName: \"kubernetes.io/projected/eaea92fe-c8a2-45e7-892e-e7897060eae4-kube-api-access-gg6d7\") pod \"authentication-operator-69f744f599-v8v5f\" (UID: \"eaea92fe-c8a2-45e7-892e-e7897060eae4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.607533 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28stp\" (UniqueName: \"kubernetes.io/projected/6f01642d-b03b-4448-9152-9285d7ca0a6c-kube-api-access-28stp\") pod \"openshift-apiserver-operator-796bbdcf4f-7hgjk\" (UID: \"6f01642d-b03b-4448-9152-9285d7ca0a6c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.626479 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.631368 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prcm4\" (UniqueName: \"kubernetes.io/projected/ffc096e2-e012-44f8-bfad-3d48cc621cc9-kube-api-access-prcm4\") pod \"machine-approver-56656f9798-8zblz\" (UID: \"ffc096e2-e012-44f8-bfad-3d48cc621cc9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.649111 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t9t6\" (UniqueName: \"kubernetes.io/projected/809a0417-e4ae-4f20-b068-90d7ce5f8617-kube-api-access-7t9t6\") pod \"cluster-samples-operator-665b6dd947-fjj99\" (UID: \"809a0417-e4ae-4f20-b068-90d7ce5f8617\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.686304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fs2n\" (UniqueName: \"kubernetes.io/projected/097861b9-f639-4e44-a54e-ae798f106ef0-kube-api-access-8fs2n\") pod \"route-controller-manager-6576b87f9c-p4n49\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.688900 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.697826 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkgts\" (UniqueName: \"kubernetes.io/projected/f4312574-3ae8-49f4-a799-e20198b71149-kube-api-access-mkgts\") pod \"machine-api-operator-5694c8668f-xxlhx\" (UID: \"f4312574-3ae8-49f4-a799-e20198b71149\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.703838 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9ckp\" (UniqueName: \"kubernetes.io/projected/399d5dbd-8565-4557-b593-f7c1ca2abcf5-kube-api-access-t9ckp\") pod \"apiserver-7bbb656c7d-2kv5d\" (UID: \"399d5dbd-8565-4557-b593-f7c1ca2abcf5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.709069 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.711656 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-srv-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.711700 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-config\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.711735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-serving-cert\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.712953 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-config\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.715564 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-serving-cert\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.718653 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.719295 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798036f8-c88a-4293-85f7-59946faf2a71-srv-cert\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.725672 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhv2t\" (UniqueName: \"kubernetes.io/projected/745e1125-670f-4e6e-acf0-e1206cf06a8e-kube-api-access-vhv2t\") pod \"apiserver-76f77b778f-5sdcl\" (UID: \"745e1125-670f-4e6e-acf0-e1206cf06a8e\") " pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.727623 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.747049 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7zf\" (UniqueName: \"kubernetes.io/projected/e322e474-b6fd-43ec-a7f4-8680a5b02172-kube-api-access-4w7zf\") pod \"openshift-config-operator-7777fb866f-9tpf2\" (UID: \"e322e474-b6fd-43ec-a7f4-8680a5b02172\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.752337 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.757286 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.767300 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.770711 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.775791 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.791546 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.831276 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.852148 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.870857 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwfrc"] Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.871091 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.891513 4768 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.901755 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:41 crc kubenswrapper[4768]: W1124 17:51:41.901887 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf08e410_ea02_4bf7_8330_d0530b2c08b5.slice/crio-4619ce1363586919481cc3d54159b704e17286c31c7b2626e95b51ca9959a3fe WatchSource:0}: Error finding container 4619ce1363586919481cc3d54159b704e17286c31c7b2626e95b51ca9959a3fe: Status 404 returned error can't find the container with id 4619ce1363586919481cc3d54159b704e17286c31c7b2626e95b51ca9959a3fe Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.910315 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.930960 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.945156 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.951740 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.968545 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.969073 4768 request.go:700] Waited for 1.930415044s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.970687 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.978749 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:41 crc kubenswrapper[4768]: I1124 17:51:41.992373 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.011552 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.053290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f98cf38a-e904-4b11-bd9a-bb558bc603ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.075175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngs9m\" (UniqueName: \"kubernetes.io/projected/9439073f-3757-4e0b-959d-fd0c1294ad75-kube-api-access-ngs9m\") pod \"machine-config-controller-84d6567774-6fdjn\" (UID: \"9439073f-3757-4e0b-959d-fd0c1294ad75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.096354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/812c8c26-80fa-4bc3-892c-d101746601c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7knl5\" (UID: \"812c8c26-80fa-4bc3-892c-d101746601c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.110237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s674\" (UniqueName: \"kubernetes.io/projected/920a0317-09dd-43e5-b5a9-11feb6d3b37d-kube-api-access-6s674\") pod \"console-f9d7485db-tj982\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.127965 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv25z\" (UniqueName: \"kubernetes.io/projected/d08bd8b5-0113-45a9-b115-205e452b1481-kube-api-access-cv25z\") pod \"dns-operator-744455d44c-gx45l\" (UID: \"d08bd8b5-0113-45a9-b115-205e452b1481\") " pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.136389 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.152352 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc955\" (UniqueName: \"kubernetes.io/projected/27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc-kube-api-access-wc955\") pod \"openshift-controller-manager-operator-756b6f6bc6-6cqxg\" (UID: \"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.161129 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.163890 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.164392 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.166961 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdnk\" (UniqueName: \"kubernetes.io/projected/f98cf38a-e904-4b11-bd9a-bb558bc603ae-kube-api-access-jvdnk\") pod \"ingress-operator-5b745b69d9-lvpkq\" (UID: \"f98cf38a-e904-4b11-bd9a-bb558bc603ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.171687 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.178251 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.187038 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xxlhx"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.189646 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh86r\" (UniqueName: \"kubernetes.io/projected/7e752bf7-ed78-42c7-a76b-dcd9ca447ab5-kube-api-access-rh86r\") pod \"etcd-operator-b45778765-nxw22\" (UID: \"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.199816 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4312574_3ae8_49f4_a799_e20198b71149.slice/crio-f7b1f544da866450c5708085d28ef4845c873f981fdb09fb7d267c6090ca091e WatchSource:0}: Error finding container f7b1f544da866450c5708085d28ef4845c873f981fdb09fb7d267c6090ca091e: Status 404 returned error can't find the container with id f7b1f544da866450c5708085d28ef4845c873f981fdb09fb7d267c6090ca091e Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.207228 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhnfb\" (UniqueName: \"kubernetes.io/projected/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-kube-api-access-zhnfb\") pod \"oauth-openshift-558db77b4-745nn\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.214703 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.223128 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.225475 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lbcxh"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.226411 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d939f2bb-c256-40c3-96de-f3cf0d53c3b0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gv8zn\" (UID: \"d939f2bb-c256-40c3-96de-f3cf0d53c3b0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.254988 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399d5dbd_8565_4557_b593_f7c1ca2abcf5.slice/crio-a5e4b793b406772502b6b24c270f58b452dd9300436d1a9f3b842a11c8dc4a24 WatchSource:0}: Error finding container a5e4b793b406772502b6b24c270f58b452dd9300436d1a9f3b842a11c8dc4a24: Status 404 returned error can't find the container with id a5e4b793b406772502b6b24c270f58b452dd9300436d1a9f3b842a11c8dc4a24 Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.256139 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f01642d_b03b_4448_9152_9285d7ca0a6c.slice/crio-7fe79666f8581ecc3b743009cdb81d6dc350609b6c207b59277038333a255450 WatchSource:0}: Error finding container 
7fe79666f8581ecc3b743009cdb81d6dc350609b6c207b59277038333a255450: Status 404 returned error can't find the container with id 7fe79666f8581ecc3b743009cdb81d6dc350609b6c207b59277038333a255450 Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.258193 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg5nw\" (UniqueName: \"kubernetes.io/projected/3f7d3e72-29f7-417e-9b42-b13c93e56f46-kube-api-access-mg5nw\") pod \"migrator-59844c95c7-j6bxp\" (UID: \"3f7d3e72-29f7-417e-9b42-b13c93e56f46\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.288166 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fgt8t"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.289285 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qmdc\" (UniqueName: \"kubernetes.io/projected/41284f0b-a93d-49a3-bfbc-1f0aeae13cdc-kube-api-access-7qmdc\") pod \"service-ca-operator-777779d784-9brwg\" (UID: \"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.304313 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v8v5f"] Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.319735 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73cd8533_3450_46e3_89b9_6dd092750ef9.slice/crio-263e86658577ac81208218cfeeb1b3e57699a0fa347f800c8c72a0b8d9e218e4 WatchSource:0}: Error finding container 263e86658577ac81208218cfeeb1b3e57699a0fa347f800c8c72a0b8d9e218e4: Status 404 returned error can't find the container with id 263e86658577ac81208218cfeeb1b3e57699a0fa347f800c8c72a0b8d9e218e4 Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.321832 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mk2n\" (UniqueName: \"kubernetes.io/projected/798036f8-c88a-4293-85f7-59946faf2a71-kube-api-access-2mk2n\") pod \"olm-operator-6b444d44fb-nd7rd\" (UID: \"798036f8-c88a-4293-85f7-59946faf2a71\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.333725 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.345856 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkfxk\" (UniqueName: \"kubernetes.io/projected/5b91c837-cd56-4b9a-b69e-7bc008877eb9-kube-api-access-pkfxk\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.364122 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwz7\" (UniqueName: \"kubernetes.io/projected/b915353f-fcb8-4d2c-841f-a2091f2c7d96-kube-api-access-dpwz7\") pod \"router-default-5444994796-8hvbs\" (UID: \"b915353f-fcb8-4d2c-841f-a2091f2c7d96\") " pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.376045 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tj982"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.377905 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ldkx\" (UniqueName: \"kubernetes.io/projected/c39e586f-224c-4428-9114-1accf92dc1d4-kube-api-access-4ldkx\") pod \"multus-admission-controller-857f4d67dd-96ff4\" (UID: \"c39e586f-224c-4428-9114-1accf92dc1d4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.391259 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnw8b\" (UniqueName: \"kubernetes.io/projected/9b8d6985-79fe-4be9-a7e3-5c762214d678-kube-api-access-xnw8b\") pod \"marketplace-operator-79b997595-6zm9x\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.414005 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b91c837-cd56-4b9a-b69e-7bc008877eb9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w74jp\" (UID: \"5b91c837-cd56-4b9a-b69e-7bc008877eb9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.415443 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.421747 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.428168 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.432190 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmf8r\" (UniqueName: \"kubernetes.io/projected/4b6167fc-ef32-4514-aa36-75ac504c9393-kube-api-access-kmf8r\") pod \"kube-storage-version-migrator-operator-b67b599dd-2blh5\" (UID: \"4b6167fc-ef32-4514-aa36-75ac504c9393\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.450834 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.452172 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.456061 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.456988 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.457034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmg9z\" (UniqueName: \"kubernetes.io/projected/622d16ca-1d8c-49e7-8ad7-c7b33b9003f2-kube-api-access-lmg9z\") pod \"control-plane-machine-set-operator-78cbb6b69f-rrct4\" (UID: \"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.460597 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.468570 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5sdcl"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.478945 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.488153 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.509144 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.515907 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.520016 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod745e1125_670f_4e6e_acf0_e1206cf06a8e.slice/crio-830c74a8972693b6a6822c2e7606343288136ce6ad81ce0c8127f1a1c32e1558 WatchSource:0}: Error finding container 830c74a8972693b6a6822c2e7606343288136ce6ad81ce0c8127f1a1c32e1558: Status 404 returned error can't find the container with id 830c74a8972693b6a6822c2e7606343288136ce6ad81ce0c8127f1a1c32e1558 Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.523095 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.524167 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9439073f_3757_4e0b_959d_fd0c1294ad75.slice/crio-62efc240089c0a2da7749b823398e161931b4b117f29dd91e6548f06f9acdeb3 WatchSource:0}: Error finding container 62efc240089c0a2da7749b823398e161931b4b117f29dd91e6548f06f9acdeb3: Status 404 returned error can't find the container with id 62efc240089c0a2da7749b823398e161931b4b117f29dd91e6548f06f9acdeb3 Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.529038 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531005 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd473c0-17b2-4d7c-830a-99afe5266762-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531046 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7t6\" (UniqueName: \"kubernetes.io/projected/b824dba7-d50a-4972-ba6f-49ee0fb30604-kube-api-access-qk7t6\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531070 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qfg\" (UniqueName: \"kubernetes.io/projected/43eb5e3a-3bc8-4437-a94c-e327666e2db3-kube-api-access-46qfg\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531127 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-tls\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531154 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0ac86e85-7038-49ec-977e-e27bad8a5d26-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531218 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd473c0-17b2-4d7c-830a-99afe5266762-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531236 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6krfx\" (UniqueName: \"kubernetes.io/projected/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-kube-api-access-6krfx\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531252 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxcg\" (UniqueName: \"kubernetes.io/projected/1dbfc132-bce0-4046-90a6-7cdac7abfe8c-kube-api-access-mfxcg\") pod \"package-server-manager-789f6589d5-7frgb\" (UID: \"1dbfc132-bce0-4046-90a6-7cdac7abfe8c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43eb5e3a-3bc8-4437-a94c-e327666e2db3-srv-cert\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8zk\" (UniqueName: \"kubernetes.io/projected/0ac86e85-7038-49ec-977e-e27bad8a5d26-kube-api-access-js8zk\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531353 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n25cm\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-kube-api-access-n25cm\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531381 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43eb5e3a-3bc8-4437-a94c-e327666e2db3-profile-collector-cert\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531404 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3288b5aa-f73a-49d8-8714-40cfc23c34c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531420 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f425\" (UniqueName: \"kubernetes.io/projected/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-kube-api-access-7f425\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531435 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b824dba7-d50a-4972-ba6f-49ee0fb30604-config-volume\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531480 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-certs\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-signing-cabundle\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-apiservice-cert\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmqj\" (UniqueName: \"kubernetes.io/projected/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-kube-api-access-jcmqj\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531933 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-node-bootstrap-token\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.531981 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-webhook-cert\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.532001 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0ac86e85-7038-49ec-977e-e27bad8a5d26-proxy-tls\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.532440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-certificates\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.533941 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3288b5aa-f73a-49d8-8714-40cfc23c34c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.534166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc132-bce0-4046-90a6-7cdac7abfe8c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7frgb\" (UID: \"1dbfc132-bce0-4046-90a6-7cdac7abfe8c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.534744 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-trusted-ca\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.534780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.534910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-bound-sa-token\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 
17:51:42.535141 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b824dba7-d50a-4972-ba6f-49ee0fb30604-secret-volume\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.535220 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ac86e85-7038-49ec-977e-e27bad8a5d26-images\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.535248 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.035235682 +0000 UTC m=+141.895817459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.535318 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-signing-key\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.535434 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3288b5aa-f73a-49d8-8714-40cfc23c34c2-config\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.535759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-tmpfs\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.536214 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.546566 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.585876 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639290 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639445 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc132-bce0-4046-90a6-7cdac7abfe8c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7frgb\" (UID: \"1dbfc132-bce0-4046-90a6-7cdac7abfe8c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.639559 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.139456029 +0000 UTC m=+142.000037816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae2d2ccc-2d97-4992-a879-286f628bb1b0-config-volume\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-trusted-ca\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639847 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-bound-sa-token\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639886 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b824dba7-d50a-4972-ba6f-49ee0fb30604-secret-volume\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639927 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ac86e85-7038-49ec-977e-e27bad8a5d26-images\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.639996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-signing-key\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3288b5aa-f73a-49d8-8714-40cfc23c34c2-config\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-tmpfs\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640114 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-plugins-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640138 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-socket-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640162 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-registration-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640187 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5prh\" (UniqueName: \"kubernetes.io/projected/b9a660e1-b6fc-40e7-a6d9-587f312ea140-kube-api-access-r5prh\") pod \"csi-hostpathplugin-qd5vx\" (UID: 
\"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640235 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-csi-data-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640260 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd473c0-17b2-4d7c-830a-99afe5266762-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640305 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7t6\" (UniqueName: \"kubernetes.io/projected/b824dba7-d50a-4972-ba6f-49ee0fb30604-kube-api-access-qk7t6\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640351 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qfg\" (UniqueName: \"kubernetes.io/projected/43eb5e3a-3bc8-4437-a94c-e327666e2db3-kube-api-access-46qfg\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640396 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwnlz\" (UniqueName: \"kubernetes.io/projected/a4adc93d-6aca-4166-a87d-1c8c13f72293-kube-api-access-zwnlz\") pod \"ingress-canary-qk7vd\" (UID: \"a4adc93d-6aca-4166-a87d-1c8c13f72293\") " pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-tls\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640475 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ac86e85-7038-49ec-977e-e27bad8a5d26-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd473c0-17b2-4d7c-830a-99afe5266762-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 
17:51:42.640580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6krfx\" (UniqueName: \"kubernetes.io/projected/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-kube-api-access-6krfx\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640603 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxcg\" (UniqueName: \"kubernetes.io/projected/1dbfc132-bce0-4046-90a6-7cdac7abfe8c-kube-api-access-mfxcg\") pod \"package-server-manager-789f6589d5-7frgb\" (UID: \"1dbfc132-bce0-4046-90a6-7cdac7abfe8c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.640628 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43eb5e3a-3bc8-4437-a94c-e327666e2db3-srv-cert\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.640697 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.14068091 +0000 UTC m=+142.001262687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.646826 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc132-bce0-4046-90a6-7cdac7abfe8c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7frgb\" (UID: \"1dbfc132-bce0-4046-90a6-7cdac7abfe8c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.647287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-trusted-ca\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.647799 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-tmpfs\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.647952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3288b5aa-f73a-49d8-8714-40cfc23c34c2-config\") pod 
\"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.655428 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ac86e85-7038-49ec-977e-e27bad8a5d26-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.655891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ac86e85-7038-49ec-977e-e27bad8a5d26-images\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.657387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8zk\" (UniqueName: \"kubernetes.io/projected/0ac86e85-7038-49ec-977e-e27bad8a5d26-kube-api-access-js8zk\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.657446 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4c2n\" (UniqueName: \"kubernetes.io/projected/ae2d2ccc-2d97-4992-a879-286f628bb1b0-kube-api-access-m4c2n\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.657538 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n25cm\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-kube-api-access-n25cm\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.657785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43eb5e3a-3bc8-4437-a94c-e327666e2db3-profile-collector-cert\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.657895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3288b5aa-f73a-49d8-8714-40cfc23c34c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.657954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f425\" (UniqueName: \"kubernetes.io/projected/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-kube-api-access-7f425\") pod \"machine-config-server-hwbdt\" (UID: 
\"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.662363 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd473c0-17b2-4d7c-830a-99afe5266762-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.662695 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b824dba7-d50a-4972-ba6f-49ee0fb30604-config-volume\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.663247 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-signing-key\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.663530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b824dba7-d50a-4972-ba6f-49ee0fb30604-secret-volume\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.663684 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd473c0-17b2-4d7c-830a-99afe5266762-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.663796 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-certs\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3288b5aa-f73a-49d8-8714-40cfc23c34c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664300 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae2d2ccc-2d97-4992-a879-286f628bb1b0-metrics-tls\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664341 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-mountpoint-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664405 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-signing-cabundle\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-node-bootstrap-token\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b824dba7-d50a-4972-ba6f-49ee0fb30604-config-volume\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664574 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-apiservice-cert\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.664614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmqj\" (UniqueName: \"kubernetes.io/projected/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-kube-api-access-jcmqj\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.668999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" event={"ID":"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9","Type":"ContainerStarted","Data":"00f29c9f3eca5c5e015230217d79aba7f0702c22170630753adae013e367fef1"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.669048 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" event={"ID":"e790bb9a-6948-438a-8d6e-b8a9db1e2aa9","Type":"ContainerStarted","Data":"5502d2caef143e192a506ab8a5a5ec96be427fe50d5773ffcdbe846d7fe1f889"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.670347 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-signing-cabundle\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.672216 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.675058 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-webhook-cert\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.676650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0ac86e85-7038-49ec-977e-e27bad8a5d26-proxy-tls\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.677059 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-certificates\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.678059 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gx45l"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.678956 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" event={"ID":"f98cf38a-e904-4b11-bd9a-bb558bc603ae","Type":"ContainerStarted","Data":"faeb68c89f6441ac4fcc0de408bf0f2d5ed43cd93b8a4fb25e09e64541df0c4c"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.680338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4adc93d-6aca-4166-a87d-1c8c13f72293-cert\") pod \"ingress-canary-qk7vd\" (UID: \"a4adc93d-6aca-4166-a87d-1c8c13f72293\") " pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.680394 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3288b5aa-f73a-49d8-8714-40cfc23c34c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.681345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-tls\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.681466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-node-bootstrap-token\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 
17:51:42.681566 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tj982" event={"ID":"920a0317-09dd-43e5-b5a9-11feb6d3b37d","Type":"ContainerStarted","Data":"89b6eea92acbe274aa5ec5dd37fbc85a0397147f838d568dfe02011bbcbfcf06"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.685654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" event={"ID":"f4312574-3ae8-49f4-a799-e20198b71149","Type":"ContainerStarted","Data":"edc8e21a00874bb245271efaa17900d5b80e76a0bddbc326d63dafadc5cca6a1"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.685691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" event={"ID":"f4312574-3ae8-49f4-a799-e20198b71149","Type":"ContainerStarted","Data":"f7b1f544da866450c5708085d28ef4845c873f981fdb09fb7d267c6090ca091e"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.686528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" event={"ID":"745e1125-670f-4e6e-acf0-e1206cf06a8e","Type":"ContainerStarted","Data":"830c74a8972693b6a6822c2e7606343288136ce6ad81ce0c8127f1a1c32e1558"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.687556 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-apiservice-cert\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.687968 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-webhook-cert\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.691936 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-certs\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.692471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-certificates\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.693296 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43eb5e3a-3bc8-4437-a94c-e327666e2db3-profile-collector-cert\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.695058 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7t6\" (UniqueName: 
\"kubernetes.io/projected/b824dba7-d50a-4972-ba6f-49ee0fb30604-kube-api-access-qk7t6\") pod \"collect-profiles-29400105-t4h2q\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.697353 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-bound-sa-token\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.702880 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-lbcxh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.702974 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" podUID="e790bb9a-6948-438a-8d6e-b8a9db1e2aa9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.703254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0ac86e85-7038-49ec-977e-e27bad8a5d26-proxy-tls\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.703430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43eb5e3a-3bc8-4437-a94c-e327666e2db3-srv-cert\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.705560 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" event={"ID":"ffc096e2-e012-44f8-bfad-3d48cc621cc9","Type":"ContainerStarted","Data":"19a55a6e5a36627a1639754800f6e035e13ae97ba55993a0cd74a5fcb50a6b79"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.705612 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" event={"ID":"ffc096e2-e012-44f8-bfad-3d48cc621cc9","Type":"ContainerStarted","Data":"e904b025cdb55e6b7149c4fab1d65c179bd8fefe1e2f6e65342242b1407a7a03"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.705627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" event={"ID":"ffc096e2-e012-44f8-bfad-3d48cc621cc9","Type":"ContainerStarted","Data":"86a8c9223fcf10c38c18e365f9d5d5cebd98890e8e5a420699f5ef90bddafbf1"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.711462 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.711716 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6krfx\" (UniqueName: \"kubernetes.io/projected/25bd0ae5-28d7-4466-bfb1-e22d25dbc966-kube-api-access-6krfx\") pod \"service-ca-9c57cc56f-6stph\" (UID: \"25bd0ae5-28d7-4466-bfb1-e22d25dbc966\") " pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.714186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" event={"ID":"df08e410-ea02-4bf7-8330-d0530b2c08b5","Type":"ContainerStarted","Data":"2a2f9b5d85566ca4d5574acebc98a68303bcdc7365c246cfe83463a383e3481a"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.714222 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" event={"ID":"df08e410-ea02-4bf7-8330-d0530b2c08b5","Type":"ContainerStarted","Data":"4619ce1363586919481cc3d54159b704e17286c31c7b2626e95b51ca9959a3fe"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.714916 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.718614 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mwfrc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.718776 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" podUID="df08e410-ea02-4bf7-8330-d0530b2c08b5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.725048 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" event={"ID":"6f01642d-b03b-4448-9152-9285d7ca0a6c","Type":"ContainerStarted","Data":"d3d48700dc474fb8b30103e3d3577b4037a3ad470f47835e13f9b154f2639ea7"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.725106 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" event={"ID":"6f01642d-b03b-4448-9152-9285d7ca0a6c","Type":"ContainerStarted","Data":"7fe79666f8581ecc3b743009cdb81d6dc350609b6c207b59277038333a255450"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.727994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" event={"ID":"809a0417-e4ae-4f20-b068-90d7ce5f8617","Type":"ContainerStarted","Data":"7e91f5697970f084ed462f4821b3fa4771aceba57135960966a9cb9b55b569b5"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.728030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" event={"ID":"809a0417-e4ae-4f20-b068-90d7ce5f8617","Type":"ContainerStarted","Data":"850ec57aef51770836a045479bbf94634ef537d697138eab91f68ad348b32c4e"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.732322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qfg\" (UniqueName: 
\"kubernetes.io/projected/43eb5e3a-3bc8-4437-a94c-e327666e2db3-kube-api-access-46qfg\") pod \"catalog-operator-68c6474976-n8bw2\" (UID: \"43eb5e3a-3bc8-4437-a94c-e327666e2db3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.736158 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" event={"ID":"e322e474-b6fd-43ec-a7f4-8680a5b02172","Type":"ContainerStarted","Data":"28b3b38416105aea67061529d6155196d0b0201b023cbe16d1fc85288ae261ee"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.743113 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" event={"ID":"812c8c26-80fa-4bc3-892c-d101746601c0","Type":"ContainerStarted","Data":"c9725e71b98303a1e1bb1acfcc15aaa61144fb47c5fe1e244f872d0ff3e41de1"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.745033 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" event={"ID":"eaea92fe-c8a2-45e7-892e-e7897060eae4","Type":"ContainerStarted","Data":"8a9901caec75de40d447e7a5c4de94cc3d0b01cdf9925f6719e9d43e6bd9348a"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.755601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fgt8t" event={"ID":"73cd8533-3450-46e3-89b9-6dd092750ef9","Type":"ContainerStarted","Data":"32feb2f5f4021a4b47fb4c3b326da89440c02e4151cec02e69e39cc77650d1e1"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.755666 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fgt8t" event={"ID":"73cd8533-3450-46e3-89b9-6dd092750ef9","Type":"ContainerStarted","Data":"263e86658577ac81208218cfeeb1b3e57699a0fa347f800c8c72a0b8d9e218e4"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.756648 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.757232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8zk\" (UniqueName: \"kubernetes.io/projected/0ac86e85-7038-49ec-977e-e27bad8a5d26-kube-api-access-js8zk\") pod \"machine-config-operator-74547568cd-ql6wq\" (UID: \"0ac86e85-7038-49ec-977e-e27bad8a5d26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.766376 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.766411 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.776568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n25cm\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-kube-api-access-n25cm\") pod 
\"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783252 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783405 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4adc93d-6aca-4166-a87d-1c8c13f72293-cert\") pod \"ingress-canary-qk7vd\" (UID: \"a4adc93d-6aca-4166-a87d-1c8c13f72293\") " pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae2d2ccc-2d97-4992-a879-286f628bb1b0-config-volume\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783616 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-plugins-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-socket-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783655 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-registration-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783671 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5prh\" (UniqueName: \"kubernetes.io/projected/b9a660e1-b6fc-40e7-a6d9-587f312ea140-kube-api-access-r5prh\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-csi-data-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783717 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnlz\" (UniqueName: \"kubernetes.io/projected/a4adc93d-6aca-4166-a87d-1c8c13f72293-kube-api-access-zwnlz\") pod \"ingress-canary-qk7vd\" (UID: 
\"a4adc93d-6aca-4166-a87d-1c8c13f72293\") " pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4c2n\" (UniqueName: \"kubernetes.io/projected/ae2d2ccc-2d97-4992-a879-286f628bb1b0-kube-api-access-m4c2n\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae2d2ccc-2d97-4992-a879-286f628bb1b0-metrics-tls\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.783859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-mountpoint-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.786219 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.286203103 +0000 UTC m=+142.146784880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.786314 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-socket-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.786463 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-mountpoint-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.786646 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-plugins-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.786887 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-registration-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " 
pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.787032 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b9a660e1-b6fc-40e7-a6d9-587f312ea140-csi-data-dir\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.788280 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae2d2ccc-2d97-4992-a879-286f628bb1b0-config-volume\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.790061 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" event={"ID":"399d5dbd-8565-4557-b593-f7c1ca2abcf5","Type":"ContainerStarted","Data":"a5e4b793b406772502b6b24c270f58b452dd9300436d1a9f3b842a11c8dc4a24"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.790523 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f425\" (UniqueName: \"kubernetes.io/projected/6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2-kube-api-access-7f425\") pod \"machine-config-server-hwbdt\" (UID: \"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2\") " pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.794503 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4adc93d-6aca-4166-a87d-1c8c13f72293-cert\") pod \"ingress-canary-qk7vd\" (UID: \"a4adc93d-6aca-4166-a87d-1c8c13f72293\") " pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.797729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" event={"ID":"097861b9-f639-4e44-a54e-ae798f106ef0","Type":"ContainerStarted","Data":"404f19099be914d24160ae3d8b19db043425475ed03848d06c7b7e2ac7af5077"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.797778 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" event={"ID":"097861b9-f639-4e44-a54e-ae798f106ef0","Type":"ContainerStarted","Data":"b622ee901b2488c793480709d0545aeb3acc5fb5fe8f8574e752338d3a6c50e2"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.799558 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.802338 4768 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p4n49 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.802582 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" podUID="097861b9-f639-4e44-a54e-ae798f106ef0" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.802961 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" event={"ID":"9439073f-3757-4e0b-959d-fd0c1294ad75","Type":"ContainerStarted","Data":"62efc240089c0a2da7749b823398e161931b4b117f29dd91e6548f06f9acdeb3"} Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.803062 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae2d2ccc-2d97-4992-a879-286f628bb1b0-metrics-tls\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.814364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxcg\" (UniqueName: \"kubernetes.io/projected/1dbfc132-bce0-4046-90a6-7cdac7abfe8c-kube-api-access-mfxcg\") pod \"package-server-manager-789f6589d5-7frgb\" (UID: \"1dbfc132-bce0-4046-90a6-7cdac7abfe8c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.814544 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg"] Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.852451 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcmqj\" (UniqueName: \"kubernetes.io/projected/3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9-kube-api-access-jcmqj\") pod \"packageserver-d55dfcdfc-krvz2\" (UID: \"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.857334 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.862603 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.867636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3288b5aa-f73a-49d8-8714-40cfc23c34c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6hp8w\" (UID: \"3288b5aa-f73a-49d8-8714-40cfc23c34c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.869477 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.887449 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.887856 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6stph" Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.888560 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.388548582 +0000 UTC m=+142.249130359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:42 crc kubenswrapper[4768]: W1124 17:51:42.896659 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb915353f_fcb8_4d2c_841f_a2091f2c7d96.slice/crio-95e09f00d9d21a09efaf5247df5f43d63578968573b43ed7db6bbf3998364166 WatchSource:0}: Error finding container 95e09f00d9d21a09efaf5247df5f43d63578968573b43ed7db6bbf3998364166: Status 404 returned error can't find the container with id 95e09f00d9d21a09efaf5247df5f43d63578968573b43ed7db6bbf3998364166 Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.897432 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnlz\" (UniqueName: \"kubernetes.io/projected/a4adc93d-6aca-4166-a87d-1c8c13f72293-kube-api-access-zwnlz\") pod \"ingress-canary-qk7vd\" (UID: \"a4adc93d-6aca-4166-a87d-1c8c13f72293\") " pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.901168 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.908574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5prh\" (UniqueName: \"kubernetes.io/projected/b9a660e1-b6fc-40e7-a6d9-587f312ea140-kube-api-access-r5prh\") pod \"csi-hostpathplugin-qd5vx\" (UID: \"b9a660e1-b6fc-40e7-a6d9-587f312ea140\") " pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.908824 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.917773 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.941406 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hwbdt" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.944288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4c2n\" (UniqueName: \"kubernetes.io/projected/ae2d2ccc-2d97-4992-a879-286f628bb1b0-kube-api-access-m4c2n\") pod \"dns-default-fph7m\" (UID: \"ae2d2ccc-2d97-4992-a879-286f628bb1b0\") " pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.948991 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.969192 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.976673 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qk7vd" Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.990381 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.990594 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.490566223 +0000 UTC m=+142.351148000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:42 crc kubenswrapper[4768]: I1124 17:51:42.990703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:42 crc kubenswrapper[4768]: E1124 17:51:42.992776 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.49276263 +0000 UTC m=+142.353344407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.094783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.094974 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.594943185 +0000 UTC m=+142.455524962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.095468 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd"
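The MountVolume.MountDevice and UnmountVolume.TearDown failures above all repeat on the same 500ms backoff (nestedpendingoperations durationBeforeRetry) for one reason: the kubelet cannot find kubevirt.io.hostpath-provisioner among the CSI drivers that have registered with it, because the csi-hostpathplugin-qd5vx pod that provides the driver is itself only now being sandboxed (its "No sandbox for pod can be found" entry appears above). The loop clears on its own once that plugin pod starts and registers over the kubelet's plugin-registration socket. A minimal sketch for checking registration from the API side, assuming the official kubernetes Python client and a reachable kubeconfig (the CSINode object mirrors the per-node driver list this error message refers to):

    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() when run inside a pod
    storage = client.StorageV1Api()

    # CSINode records which drivers each node's kubelet has accepted a registration for;
    # kubevirt.io.hostpath-provisioner should appear here once its plugin pod is running.
    for csinode in storage.list_csi_node().items:
        names = [d.name for d in (csinode.spec.drivers or [])]
        print(csinode.metadata.name, names)

The same view is available with kubectl get csinode -o yaml; until the registration lands, the retry entries below repeat essentially verbatim apart from their timestamps.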
Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.095867 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.595855908 +0000 UTC m=+142.456437685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.125794 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.132756 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nxw22"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.197735 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.198238 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.698216687 +0000 UTC m=+142.558798464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: W1124 17:51:43.226192 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e752bf7_ed78_42c7_a76b_dcd9ca447ab5.slice/crio-481f59a6fc159e8a14ea67ae82e3894ba295678c0345898fc96ac7e32ae86571 WatchSource:0}: Error finding container 481f59a6fc159e8a14ea67ae82e3894ba295678c0345898fc96ac7e32ae86571: Status 404 returned error can't find the container with id 481f59a6fc159e8a14ea67ae82e3894ba295678c0345898fc96ac7e32ae86571 Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.321805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.322081 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.822070391 +0000 UTC m=+142.682652168 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.403093 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-745nn"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.423116 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.424694 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.924672346 +0000 UTC m=+142.785254123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.426575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.426928 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:43.926915144 +0000 UTC m=+142.787496921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.532331 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.532585 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.032568059 +0000 UTC m=+142.893149836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.575748 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.576808 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.579871 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9brwg"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.596267 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.614467 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zm9x"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.617202 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-96ff4"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.634978 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.636470 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.136457787 +0000 UTC m=+142.997039554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.663904 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.664366 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.745762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.745965 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.24595121 +0000 UTC m=+143.106532987 (durationBeforeRetry 500ms). 
Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.745965 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.24595121 +0000 UTC m=+143.106532987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.761519 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp"] Nov 24 17:51:43 crc kubenswrapper[4768]: W1124 17:51:43.776886 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b91c837_cd56_4b9a_b69e_7bc008877eb9.slice/crio-8115b035c2008e78d1d93ab30ad94331ef651586d4039b5c437f58ca64c6c782 WatchSource:0}: Error finding container 8115b035c2008e78d1d93ab30ad94331ef651586d4039b5c437f58ca64c6c782: Status 404 returned error can't find the container with id 8115b035c2008e78d1d93ab30ad94331ef651586d4039b5c437f58ca64c6c782 Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.808382 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" event={"ID":"3f7d3e72-29f7-417e-9b42-b13c93e56f46","Type":"ContainerStarted","Data":"7d333c68a8e6b4f9ac0f2f99b444e309233ae3b26d0e4eec12f3e6790d9992af"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.821225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tj982" event={"ID":"920a0317-09dd-43e5-b5a9-11feb6d3b37d","Type":"ContainerStarted","Data":"6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.826584 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.846850 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.848635 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.348621048 +0000 UTC m=+143.209202825 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.878660 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w"] Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.893047 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" event={"ID":"9439073f-3757-4e0b-959d-fd0c1294ad75","Type":"ContainerStarted","Data":"fc3a805268249161a962af850eda71d0e3aabf586d3039db7e3bbfd3f1e1112c"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.928355 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" event={"ID":"9b8d6985-79fe-4be9-a7e3-5c762214d678","Type":"ContainerStarted","Data":"aab1be0345c20098b90015c53a3eb20c79726a92eed6aa8f33ba121e023b5209"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.940991 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hwbdt" event={"ID":"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2","Type":"ContainerStarted","Data":"1d17122fe29eb79f053e56f7156fccb5cc32c67ad21a96a12bb6cff0e84e8922"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.948827 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8hvbs" event={"ID":"b915353f-fcb8-4d2c-841f-a2091f2c7d96","Type":"ContainerStarted","Data":"95e09f00d9d21a09efaf5247df5f43d63578968573b43ed7db6bbf3998364166"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.950132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:43 crc kubenswrapper[4768]: E1124 17:51:43.950410 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.450396072 +0000 UTC m=+143.310977849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.957150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" event={"ID":"f98cf38a-e904-4b11-bd9a-bb558bc603ae","Type":"ContainerStarted","Data":"a67e93fe775c12e804a121c821ef8f67e223c59322f94785db98f98b1737a2b6"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.959670 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" event={"ID":"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc","Type":"ContainerStarted","Data":"c401320b28d0cc82dd59b694586e9e8e86b740ecce9d2aae93c612cb7e14eed5"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.961572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" event={"ID":"798036f8-c88a-4293-85f7-59946faf2a71","Type":"ContainerStarted","Data":"cfbfd4cd53282c4c8061c18094b07e75ea2fe25594b35c7e8ad533591b63a146"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.962770 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" event={"ID":"4b6167fc-ef32-4514-aa36-75ac504c9393","Type":"ContainerStarted","Data":"f25c0a675a0106f1ddbbd79df5b5a37f7b542ad2e2f65c3653b55d7d0973a88b"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.963868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" event={"ID":"c39e586f-224c-4428-9114-1accf92dc1d4","Type":"ContainerStarted","Data":"c32cbba216a27786d2b0c2b586d518bd2c758858f1d482fc7522f8dec7c7be41"} Nov 24 17:51:43 crc kubenswrapper[4768]: I1124 17:51:43.969438 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" event={"ID":"809a0417-e4ae-4f20-b068-90d7ce5f8617","Type":"ContainerStarted","Data":"abf2970fbdc2ec97558bc9b0a1c563504b9b3087a91b8802b50b352a2520909b"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.035289 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6stph"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.043146 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" event={"ID":"5b91c837-cd56-4b9a-b69e-7bc008877eb9","Type":"ContainerStarted","Data":"8115b035c2008e78d1d93ab30ad94331ef651586d4039b5c437f58ca64c6c782"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.051153 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 
17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.051399 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.551387986 +0000 UTC m=+143.411969763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.051846 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qk7vd"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.056099 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.056311 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" event={"ID":"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5","Type":"ContainerStarted","Data":"481f59a6fc159e8a14ea67ae82e3894ba295678c0345898fc96ac7e32ae86571"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.071005 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" event={"ID":"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2","Type":"ContainerStarted","Data":"90cdbdad98e4a7d4eaaa6ca3e4f59e61e0957f823f0354c9343388442b04e727"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.071857 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.082196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" event={"ID":"eaea92fe-c8a2-45e7-892e-e7897060eae4","Type":"ContainerStarted","Data":"e06ce8d86205f0eacc50dcfad8482815bfa7102f38c1744d39664378e657d8ec"} Nov 24 17:51:44 crc kubenswrapper[4768]: W1124 17:51:44.090729 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4adc93d_6aca_4166_a87d_1c8c13f72293.slice/crio-2d13c7217dff77bdadc96834e46eb92b00b6003a556b51230604d951c3426acb WatchSource:0}: Error finding container 2d13c7217dff77bdadc96834e46eb92b00b6003a556b51230604d951c3426acb: Status 404 returned error can't find the container with id 2d13c7217dff77bdadc96834e46eb92b00b6003a556b51230604d951c3426acb Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.103184 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" event={"ID":"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b","Type":"ContainerStarted","Data":"a49a761cef391e672338ab62f6ddf2a71a96ec9f2567c0537c0111edeb83fce1"} Nov 24 17:51:44 crc kubenswrapper[4768]: W1124 17:51:44.104690 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dbfc132_bce0_4046_90a6_7cdac7abfe8c.slice/crio-fdb50ecef4cec4c0837ee0e88057f48150d2c15a21cf0a995dd25bc252341716 WatchSource:0}: Error finding container fdb50ecef4cec4c0837ee0e88057f48150d2c15a21cf0a995dd25bc252341716: Status 404 returned error can't find the container with id fdb50ecef4cec4c0837ee0e88057f48150d2c15a21cf0a995dd25bc252341716 Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.118845 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" event={"ID":"f4312574-3ae8-49f4-a799-e20198b71149","Type":"ContainerStarted","Data":"9f077726f4d350507615b93b1539b50b259ef068d9059762957a94b0be81f315"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.126234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" event={"ID":"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc","Type":"ContainerStarted","Data":"35bb49aebd1e373f5f8c58fb86bfea03e8313b5482cb47c5e761662ca4915121"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.128793 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" event={"ID":"d939f2bb-c256-40c3-96de-f3cf0d53c3b0","Type":"ContainerStarted","Data":"e11c362a64a15582292dcf9c643808ebf8890e236db36b57ca39020aa52d7f90"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.140731 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" event={"ID":"d08bd8b5-0113-45a9-b115-205e452b1481","Type":"ContainerStarted","Data":"22ef96a27a66076f1795bfea7daec94c2062f833ce79c4d8741d08aeb0eaa6e5"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.151978 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.152325 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.652309958 +0000 UTC m=+143.512891735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.153797 4768 generic.go:334] "Generic (PLEG): container finished" podID="e322e474-b6fd-43ec-a7f4-8680a5b02172" containerID="5e5c66ec690fd8b9d9161db91c55e78ab1ae8077ff85b50990d61b97e9c09d6e" exitCode=0 Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.153974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" event={"ID":"e322e474-b6fd-43ec-a7f4-8680a5b02172","Type":"ContainerDied","Data":"5e5c66ec690fd8b9d9161db91c55e78ab1ae8077ff85b50990d61b97e9c09d6e"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.168396 4768 generic.go:334] "Generic (PLEG): container finished" podID="399d5dbd-8565-4557-b593-f7c1ca2abcf5" containerID="dde6e188fe9158f032aeaebbb6aa4e81ce63fe5502aba0b09be870ef78239a84" exitCode=0 Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.171192 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-lbcxh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.171237 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" podUID="e790bb9a-6948-438a-8d6e-b8a9db1e2aa9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.171652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" event={"ID":"399d5dbd-8565-4557-b593-f7c1ca2abcf5","Type":"ContainerDied","Data":"dde6e188fe9158f032aeaebbb6aa4e81ce63fe5502aba0b09be870ef78239a84"} Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.173838 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.173889 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.192271 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.197782 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 
17:51:44.198519 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fph7m"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.225621 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.233171 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.237082 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qd5vx"] Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.253449 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.253875 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.753855726 +0000 UTC m=+143.614437573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: W1124 17:51:44.341600 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac86e85_7038_49ec_977e_e27bad8a5d26.slice/crio-bfad349b1e472ec1a1634516469662441bfa8d8b5344a118199afd9f882e8f97 WatchSource:0}: Error finding container bfad349b1e472ec1a1634516469662441bfa8d8b5344a118199afd9f882e8f97: Status 404 returned error can't find the container with id bfad349b1e472ec1a1634516469662441bfa8d8b5344a118199afd9f882e8f97 Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.354307 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.355841 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.855797915 +0000 UTC m=+143.716379692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.434256 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99" podStartSLOduration=123.434239648 podStartE2EDuration="2m3.434239648s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.433784665 +0000 UTC m=+143.294366442" watchObservedRunningTime="2025-11-24 17:51:44.434239648 +0000 UTC m=+143.294821425" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.450791 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xxlhx" podStartSLOduration=123.450769764 podStartE2EDuration="2m3.450769764s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.383754376 +0000 UTC m=+143.244336163" watchObservedRunningTime="2025-11-24 17:51:44.450769764 +0000 UTC m=+143.311351541" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.457297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd"
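The pod_startup_latency_tracker.go:104 entries here are the kubelet's startup-SLO bookkeeping. podStartSLOduration is, roughly, the pod's end-to-end startup time minus time spent pulling images; because firstStartedPulling and lastFinishedPulling carry the zero timestamp (the images were already on the node), the SLO and E2E figures coincide, about 2m3s from podCreationTimestamp 17:49:41 to the observed running time at 17:51:44. A small sketch for extracting these figures from a captured journal like this one, assuming only the key=value layout visible in the entries (the kubelet.log filename is a stand-in for wherever the capture lives):

    import re

    # pod="<namespace>/<name>" ... podStartSLOduration=<seconds>
    PATTERN = re.compile(r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[\d.]+)')

    with open("kubelet.log", encoding="utf-8") as fh:
        for match in PATTERN.finditer(fh.read()):
            print(f'{match.group("pod")}: {float(match.group("slo")):.1f}s')

Run over this section it would report, for example, openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fjj99: 123.4s, matching the 2m3.43s podStartE2EDuration logged above.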
Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.457724 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:44.957709893 +0000 UTC m=+143.818291670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.506976 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fgt8t" podStartSLOduration=123.506957303 podStartE2EDuration="2m3.506957303s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.478019176 +0000 UTC m=+143.338600953" watchObservedRunningTime="2025-11-24 17:51:44.506957303 +0000 UTC m=+143.367539080" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.507815 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" podStartSLOduration=123.507808055 podStartE2EDuration="2m3.507808055s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.503848722 +0000 UTC m=+143.364430509" watchObservedRunningTime="2025-11-24 17:51:44.507808055 +0000 UTC m=+143.368389832" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.558791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.559111 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.059077766 +0000 UTC m=+143.919659543 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.559372 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.559739 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.059731983 +0000 UTC m=+143.920313760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.660048 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.665744 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.165713686 +0000 UTC m=+144.026295483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.668638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.669619 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.169604237 +0000 UTC m=+144.030186014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.708363 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zblz" podStartSLOduration=124.708346895 podStartE2EDuration="2m4.708346895s" podCreationTimestamp="2025-11-24 17:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.665804709 +0000 UTC m=+143.526386486" watchObservedRunningTime="2025-11-24 17:51:44.708346895 +0000 UTC m=+143.568928682" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.713000 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tj982" podStartSLOduration=123.712986655 podStartE2EDuration="2m3.712986655s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.706725814 +0000 UTC m=+143.567307611" watchObservedRunningTime="2025-11-24 17:51:44.712986655 +0000 UTC m=+143.573568452" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.760852 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hgjk" podStartSLOduration=124.760823879 podStartE2EDuration="2m4.760823879s" podCreationTimestamp="2025-11-24 17:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.749866276 +0000 UTC m=+143.610448073" watchObservedRunningTime="2025-11-24 17:51:44.760823879 +0000 UTC m=+143.621405656" Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 
17:51:44.771706 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.772157 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.27213898 +0000 UTC m=+144.132720767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.836001 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" podStartSLOduration=123.835983787 podStartE2EDuration="2m3.835983787s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.798712696 +0000 UTC m=+143.659294473" watchObservedRunningTime="2025-11-24 17:51:44.835983787 +0000 UTC m=+143.696565564"
Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.871363 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8v5f" podStartSLOduration=124.871345139 podStartE2EDuration="2m4.871345139s" podCreationTimestamp="2025-11-24 17:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.83571398 +0000 UTC m=+143.696295757" watchObservedRunningTime="2025-11-24 17:51:44.871345139 +0000 UTC m=+143.731926916"
Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.873009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd"
Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.873425 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.373410241 +0000 UTC m=+144.233992018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.920700 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" podStartSLOduration=123.92067814 podStartE2EDuration="2m3.92067814s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:44.871173384 +0000 UTC m=+143.731755151" watchObservedRunningTime="2025-11-24 17:51:44.92067814 +0000 UTC m=+143.781259917"
Nov 24 17:51:44 crc kubenswrapper[4768]: I1124 17:51:44.974738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 17:51:44 crc kubenswrapper[4768]: E1124 17:51:44.975684 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.475649568 +0000 UTC m=+144.336231355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.076380 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd"
Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.076917 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.576900548 +0000 UTC m=+144.437482325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.177253 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.177537 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.677522683 +0000 UTC m=+144.538104460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.232243 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" event={"ID":"3288b5aa-f73a-49d8-8714-40cfc23c34c2","Type":"ContainerStarted","Data":"8b16fe694a2443d26a0f50a79839e73a48eda852244e3288bd488bcebf8923a8"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.232313 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" event={"ID":"3288b5aa-f73a-49d8-8714-40cfc23c34c2","Type":"ContainerStarted","Data":"752d483882c2c9a408dd224060af296dd0f65596a9cc5e314c2a3baf91eee66b"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.280322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" event={"ID":"4b6167fc-ef32-4514-aa36-75ac504c9393","Type":"ContainerStarted","Data":"8cba321c2f44c195bc37e31911c4ea9bfd57d60ecfa0e51a4c252ad415a1ef3e"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.297163 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd"
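
All of the UnmountVolume.TearDown and MountVolume.MountDevice failures in this stretch share one root cause: the kubelet resolves the CSI driver by name against its table of registered plugins, and kubevirt.io.hostpath-provisioner has apparently not yet announced itself over the plugin-registration socket (the csi-hostpathplugin pod only reports ContainerStarted further down), so each attempt fails fast and is re-queued with durationBeforeRetry 500ms. A minimal Go sketch of that lookup-then-requeue shape, assuming nothing about kubelet internals (registry, lookup, and retryAfter are hypothetical names, not kubelet's actual API):

    package main

    import (
        "fmt"
        "time"
    )

    // registry stands in for the kubelet's table of CSI drivers that have
    // announced themselves over the plugin-registration socket (typically
    // via a registration sidecar shipped with the driver).
    var registry = map[string]bool{
        // "kubevirt.io.hostpath-provisioner" would appear here once the
        // hostpath plugin finishes registering.
    }

    // lookup mirrors the error shape seen in the log: fail fast when the
    // driver name is not registered, rather than blocking.
    func lookup(driver string) error {
        if !registry[driver] {
            return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return nil
    }

    func main() {
        retryAfter := 500 * time.Millisecond // matches durationBeforeRetry 500ms
        if err := lookup("kubevirt.io.hostpath-provisioner"); err != nil {
            // The volume manager re-queues the operation for a later pass:
            // "No retries permitted until <now+retryAfter>".
            fmt.Printf("failed: %v; no retries permitted until %s\n",
                err, time.Now().Add(retryAfter).Format(time.RFC3339Nano))
        }
    }

Once the driver registers, the same queued operations succeed on their next pass, which is why these errors are expected to stop on their own without intervention.
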
Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.298921 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.798904512 +0000 UTC m=+144.659486289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.318331 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" event={"ID":"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b","Type":"ContainerStarted","Data":"665d22d56488cb3101832f0b65ab3a83f58ddd283b03b8738c5430b85c610cb2"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.318652 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-745nn"
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.325437 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" event={"ID":"41284f0b-a93d-49a3-bfbc-1f0aeae13cdc","Type":"ContainerStarted","Data":"5da0afc92cecc816224f20227f3955e273056d79add76ccc74d211154e10cacf"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.328156 4768 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-745nn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body=
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.328224 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" podUID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused"
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.367240 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hwbdt" event={"ID":"6d651da5-cbbb-452b-b2ac-bd2f8ce1d4f2","Type":"ContainerStarted","Data":"beaadf027549847c0ca801885c155e219276413842b4991eed4d9a69a69ac0b4"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.378139 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" event={"ID":"0ac86e85-7038-49ec-977e-e27bad8a5d26","Type":"ContainerStarted","Data":"4dec444917cb9ebe97a5d26ac9df6ebf5ad144edec9a9d9c14cd3a1da0f668f9"}
Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.378186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" event={"ID":"0ac86e85-7038-49ec-977e-e27bad8a5d26","Type":"ContainerStarted","Data":"bfad349b1e472ec1a1634516469662441bfa8d8b5344a118199afd9f882e8f97"}
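
The oauth-openshift readiness failure just above is the normal pattern for a container that has started but is not yet listening: the kubelet's HTTP prober gets "connection refused" on /healthz and records the probe as failed until the socket opens. A rough Go equivalent of such an HTTP readiness check (a sketch, not the kubelet prober; the probeURL value is copied from the log line, and the kubelet does not verify serving certificates for HTTPS probes, so the sketch skips verification too):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // URL from the failed probe above.
        probeURL := "https://10.217.0.13:6443/healthz"
        client := &http.Client{
            Timeout:   1 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(probeURL)
        if err != nil {
            // "connect: connection refused" surfaces here while the
            // container's server socket is not yet listening.
            fmt.Printf("probe failed: %v\n", err)
            return
        }
        defer resp.Body.Close()
        // HTTP probes count 2xx/3xx as success, anything else as failure.
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            fmt.Println("probe ok:", resp.Status)
        } else {
            fmt.Println("probe failed:", resp.Status)
        }
    }
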
event={"ID":"e322e474-b6fd-43ec-a7f4-8680a5b02172","Type":"ContainerStarted","Data":"898f673bdd6f9f924ea210eb2796a83f54788dfc38ce260076bb7115acee8798"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.381434 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.386951 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" event={"ID":"d08bd8b5-0113-45a9-b115-205e452b1481","Type":"ContainerStarted","Data":"3ed165bf34ce055cd1434f231e5cb28394bca3c39a86b30540a17a0cd1f3ec53"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.387002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" event={"ID":"d08bd8b5-0113-45a9-b115-205e452b1481","Type":"ContainerStarted","Data":"1cbc9f586f65069775b1edfaeb01a01fcb0a1b4bea264815078b57699c29da57"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.398618 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.400249 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:45.900229465 +0000 UTC m=+144.760811242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.404137 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fph7m" event={"ID":"ae2d2ccc-2d97-4992-a879-286f628bb1b0","Type":"ContainerStarted","Data":"7bf64586f01d8b57726b5c7b7589facb499cc0da7212448f933f12809484e9fb"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.413057 4768 generic.go:334] "Generic (PLEG): container finished" podID="745e1125-670f-4e6e-acf0-e1206cf06a8e" containerID="045a18dd01385acf2132f1d5f8edf26e56903224aa2f3ad573f121d92a6a00a6" exitCode=0 Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.413129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" event={"ID":"745e1125-670f-4e6e-acf0-e1206cf06a8e","Type":"ContainerDied","Data":"045a18dd01385acf2132f1d5f8edf26e56903224aa2f3ad573f121d92a6a00a6"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.417237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" event={"ID":"798036f8-c88a-4293-85f7-59946faf2a71","Type":"ContainerStarted","Data":"4c8c8b452ce5fde0f2447a0dbebddcee61c693dae4dc1f129f917c65717ff7dd"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.418242 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.422874 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nd7rd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.422926 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" podUID="798036f8-c88a-4293-85f7-59946faf2a71" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.426077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" event={"ID":"d939f2bb-c256-40c3-96de-f3cf0d53c3b0","Type":"ContainerStarted","Data":"643329d684e0236f82ddd92fae68add37bf4b7ebc80bd7b1c778fd041cfdd9c9"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.428245 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" event={"ID":"7e752bf7-ed78-42c7-a76b-dcd9ca447ab5","Type":"ContainerStarted","Data":"d24ea8f7d78d74cf86900c71f8488f7ee71889c1db645ae0b53bec7bed0ec831"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.445675 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" 
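
The "SyncLoop (PLEG)" lines are the kubelet's Pod Lifecycle Event Generator relisting container state and emitting one event per observed transition; the "Generic (PLEG): container finished ... exitCode=0" and ContainerDied pair above shows the same container ID exiting cleanly, consistent with an init step completing before the main containers start. The logged payload is just a pod ID, an event type, and a container or sandbox ID, roughly (a simplified sketch of the logged fields, not the actual kubelet types):

    package main

    import "fmt"

    // podLifecycleEvent mirrors the three fields visible in the log:
    // event={"ID":..., "Type":..., "Data":...}.
    type podLifecycleEvent struct {
        ID   string // pod UID
        Type string // e.g. ContainerStarted, ContainerDied
        Data string // container or sandbox ID the event refers to
    }

    func main() {
        // The apiserver-76f77b778f-5sdcl sequence from the log: the same
        // container ID that finished with exit code 0 is reported as died.
        ev := podLifecycleEvent{
            ID:   "745e1125-670f-4e6e-acf0-e1206cf06a8e",
            Type: "ContainerDied",
            Data: "045a18dd01385acf2132f1d5f8edf26e56903224aa2f3ad573f121d92a6a00a6",
        }
        fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }
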
event={"ID":"5b91c837-cd56-4b9a-b69e-7bc008877eb9","Type":"ContainerStarted","Data":"f4a6e14fca082bf5ed1eefdff19280bcb17aef22fea6e58a773aeb0ee68d5b16"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.459657 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" event={"ID":"43eb5e3a-3bc8-4437-a94c-e327666e2db3","Type":"ContainerStarted","Data":"c70881d167ba1b1a4272c672cb2447231019604e3fa8434ee5b10b3aa9a466b0"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.459704 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" event={"ID":"43eb5e3a-3bc8-4437-a94c-e327666e2db3","Type":"ContainerStarted","Data":"2b9e264004799c99c26241ed6a57d3af8a6600098b27bf6b5142020279abcac7"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.460609 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.461979 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n8bw2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.462041 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" podUID="43eb5e3a-3bc8-4437-a94c-e327666e2db3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.464435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" event={"ID":"1dbfc132-bce0-4046-90a6-7cdac7abfe8c","Type":"ContainerStarted","Data":"337849b2a4edd014946ca5a6276e34b1a650aa1234fdbd81b760b245243ceae8"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.464485 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" event={"ID":"1dbfc132-bce0-4046-90a6-7cdac7abfe8c","Type":"ContainerStarted","Data":"fdb50ecef4cec4c0837ee0e88057f48150d2c15a21cf0a995dd25bc252341716"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.465049 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.466605 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" event={"ID":"3f7d3e72-29f7-417e-9b42-b13c93e56f46","Type":"ContainerStarted","Data":"46c4d48120538264ef8473392572f733cfc9f2928c4fb070b299db693b26c48d"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.466633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" event={"ID":"3f7d3e72-29f7-417e-9b42-b13c93e56f46","Type":"ContainerStarted","Data":"abcece11edf4bd420ff27f06c2ab2d9c249b83734561a33f9a906b42bae78ce8"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.483740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" event={"ID":"9b8d6985-79fe-4be9-a7e3-5c762214d678","Type":"ContainerStarted","Data":"cfb967f735aa5e15d268e8ab56d2e2adb067db43ca98f86627c8c57c958d0835"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.485004 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.488503 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6zm9x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.488537 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.488978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qk7vd" event={"ID":"a4adc93d-6aca-4166-a87d-1c8c13f72293","Type":"ContainerStarted","Data":"8ebbc2230bf4deaaa3d123f993346e7949341f5257981bb7e9a96b6f4ae0af1a"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.489004 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qk7vd" event={"ID":"a4adc93d-6aca-4166-a87d-1c8c13f72293","Type":"ContainerStarted","Data":"2d13c7217dff77bdadc96834e46eb92b00b6003a556b51230604d951c3426acb"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.490442 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" event={"ID":"812c8c26-80fa-4bc3-892c-d101746601c0","Type":"ContainerStarted","Data":"f615bc4df7b194ab1ff63ff89de5599f382b93ecd3633992a207906f845dd4aa"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.491620 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6stph" event={"ID":"25bd0ae5-28d7-4466-bfb1-e22d25dbc966","Type":"ContainerStarted","Data":"9fb498fc50b237d783f96a8b505afd470ff3635c597ee3d48d5a59a3812d54d0"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.491644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6stph" event={"ID":"25bd0ae5-28d7-4466-bfb1-e22d25dbc966","Type":"ContainerStarted","Data":"b61d4d1276f56f8bda121eed2568ce920cbff18c6f0e0fc8d1ec777111ce11d0"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.493076 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" event={"ID":"b824dba7-d50a-4972-ba6f-49ee0fb30604","Type":"ContainerStarted","Data":"7d0d31770b074427d01065c4f9c8c516cc2dd52adaa5af03c58fc78b329a97c8"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.493097 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" event={"ID":"b824dba7-d50a-4972-ba6f-49ee0fb30604","Type":"ContainerStarted","Data":"24d8f09cebb3b30911216b3010a67b682e959371e74d288f82f3627229f8398a"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 
17:51:45.495833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" event={"ID":"b9a660e1-b6fc-40e7-a6d9-587f312ea140","Type":"ContainerStarted","Data":"0b87bf5164e6027ce34a6dae7f03ba1d5429703d61e72cb65d6a819a8d2ac6f9"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.500098 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.503779 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.003740243 +0000 UTC m=+144.864322030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.552252 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" event={"ID":"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9","Type":"ContainerStarted","Data":"83d65c0400c8b08d8f215b59a5531491dfd6fe38c45923396d38d2dda6a1518d"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.552294 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" event={"ID":"3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9","Type":"ContainerStarted","Data":"c722b0127541c416a7b6cf1645c238f0713c7a3fefcc42753f6ec84b41a8223d"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.553114 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.556840 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-krvz2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.556879 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" podUID="3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.577179 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gx45l" podStartSLOduration=124.577160308 podStartE2EDuration="2m4.577160308s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.530214097 +0000 UTC m=+144.390795874" watchObservedRunningTime="2025-11-24 17:51:45.577160308 +0000 UTC m=+144.437742085" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.578067 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" podStartSLOduration=124.57806106 podStartE2EDuration="2m4.57806106s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.576341136 +0000 UTC m=+144.436922913" watchObservedRunningTime="2025-11-24 17:51:45.57806106 +0000 UTC m=+144.438642837" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.587999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" event={"ID":"622d16ca-1d8c-49e7-8ad7-c7b33b9003f2","Type":"ContainerStarted","Data":"2b2ad69977f9ee370779b92705667f67b1c9b6ea43f03cc7e5a5356b3c8ff372"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.591317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" event={"ID":"27aa0431-2b4f-40d0-98e5-38c1d4e2a0bc","Type":"ContainerStarted","Data":"d38656b09288cc408aec65dd79f1b7563a62ffdc5414946726d86e992eac6377"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.593258 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" event={"ID":"9439073f-3757-4e0b-959d-fd0c1294ad75","Type":"ContainerStarted","Data":"74a22ad0c15c447876efdaddcefc5ac77a872f3597a237eca4924416d67bc6df"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.595091 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8hvbs" event={"ID":"b915353f-fcb8-4d2c-841f-a2091f2c7d96","Type":"ContainerStarted","Data":"a5bdb125acb501afb96ab1c602a20c2ad8cfc45fae2b8a971c8e7a5298f7ed7e"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.596809 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" event={"ID":"c39e586f-224c-4428-9114-1accf92dc1d4","Type":"ContainerStarted","Data":"2cb09aa51f04cde1cb1d9a35fe4426e99d5a98fc1d5d830572a40feead43c2e3"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.601550 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w74jp" podStartSLOduration=124.601536715 podStartE2EDuration="2m4.601536715s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.599689768 +0000 UTC m=+144.460271545" watchObservedRunningTime="2025-11-24 17:51:45.601536715 +0000 UTC m=+144.462118492" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.602952 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.603829 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.103817224 +0000 UTC m=+144.964399001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.620743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" event={"ID":"f98cf38a-e904-4b11-bd9a-bb558bc603ae","Type":"ContainerStarted","Data":"0e59674eb6d379c881f20edd4b530de76bdd91eead3ed95766689a2ecd5df2cd"} Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.621726 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.621768 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.634554 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" podStartSLOduration=124.634532476 podStartE2EDuration="2m4.634532476s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.63310721 +0000 UTC m=+144.493688987" watchObservedRunningTime="2025-11-24 17:51:45.634532476 +0000 UTC m=+144.495114253" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.675806 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nxw22" podStartSLOduration=124.67578485 podStartE2EDuration="2m4.67578485s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.666496761 +0000 UTC m=+144.527078538" watchObservedRunningTime="2025-11-24 17:51:45.67578485 +0000 UTC m=+144.536366627" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.697457 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9brwg" podStartSLOduration=124.697440879 podStartE2EDuration="2m4.697440879s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 17:51:45.695453407 +0000 UTC m=+144.556035184" watchObservedRunningTime="2025-11-24 17:51:45.697440879 +0000 UTC m=+144.558022656" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.704635 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.724509 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.224493656 +0000 UTC m=+145.085075433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.760246 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gv8zn" podStartSLOduration=124.760230587 podStartE2EDuration="2m4.760230587s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.758876832 +0000 UTC m=+144.619458609" watchObservedRunningTime="2025-11-24 17:51:45.760230587 +0000 UTC m=+144.620812364" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.774168 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" podStartSLOduration=124.774147276 podStartE2EDuration="2m4.774147276s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.733973251 +0000 UTC m=+144.594555028" watchObservedRunningTime="2025-11-24 17:51:45.774147276 +0000 UTC m=+144.634729053" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.808615 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.808978 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.308942593 +0000 UTC m=+145.169524370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.809502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.810145 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.310111574 +0000 UTC m=+145.170693351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.818908 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6stph" podStartSLOduration=124.81888153 podStartE2EDuration="2m4.81888153s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.817448442 +0000 UTC m=+144.678030219" watchObservedRunningTime="2025-11-24 17:51:45.81888153 +0000 UTC m=+144.679463307" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.870980 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j6bxp" podStartSLOduration=124.870965953 podStartE2EDuration="2m4.870965953s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.869731371 +0000 UTC m=+144.730313148" watchObservedRunningTime="2025-11-24 17:51:45.870965953 +0000 UTC m=+144.731547720" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.871896 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hwbdt" podStartSLOduration=6.871890617 podStartE2EDuration="6.871890617s" podCreationTimestamp="2025-11-24 17:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.845591178 +0000 UTC m=+144.706172955" watchObservedRunningTime="2025-11-24 17:51:45.871890617 +0000 UTC m=+144.732472394" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.900968 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2blh5" podStartSLOduration=124.900947966 podStartE2EDuration="2m4.900947966s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.900149135 +0000 UTC m=+144.760730922" watchObservedRunningTime="2025-11-24 17:51:45.900947966 +0000 UTC m=+144.761529743" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.918695 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:45 crc kubenswrapper[4768]: E1124 17:51:45.919150 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.419133555 +0000 UTC m=+145.279715332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.937095 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7knl5" podStartSLOduration=124.937073447 podStartE2EDuration="2m4.937073447s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.935169988 +0000 UTC m=+144.795751765" watchObservedRunningTime="2025-11-24 17:51:45.937073447 +0000 UTC m=+144.797655224" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.969778 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qk7vd" podStartSLOduration=6.96975473 podStartE2EDuration="6.96975473s" podCreationTimestamp="2025-11-24 17:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.967215725 +0000 UTC m=+144.827797502" watchObservedRunningTime="2025-11-24 17:51:45.96975473 +0000 UTC m=+144.830336507" Nov 24 17:51:45 crc kubenswrapper[4768]: I1124 17:51:45.990899 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" podStartSLOduration=124.990884375 podStartE2EDuration="2m4.990884375s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:45.9891372 +0000 UTC m=+144.849718977" 
watchObservedRunningTime="2025-11-24 17:51:45.990884375 +0000 UTC m=+144.851466162" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.024265 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.024614 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.524602174 +0000 UTC m=+145.385183941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.125328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.125810 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.625789923 +0000 UTC m=+145.486371700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.152775 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" podStartSLOduration=125.152755708 podStartE2EDuration="2m5.152755708s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.146895877 +0000 UTC m=+145.007477654" watchObservedRunningTime="2025-11-24 17:51:46.152755708 +0000 UTC m=+145.013337485" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.153339 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" podStartSLOduration=125.153332673 podStartE2EDuration="2m5.153332673s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.11676398 +0000 UTC m=+144.977345757" watchObservedRunningTime="2025-11-24 17:51:46.153332673 +0000 UTC m=+145.013914450" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.179639 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6hp8w" podStartSLOduration=125.179616241 podStartE2EDuration="2m5.179616241s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.175947956 +0000 UTC m=+145.036529733" watchObservedRunningTime="2025-11-24 17:51:46.179616241 +0000 UTC m=+145.040198018" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.233137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.233477 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.73346555 +0000 UTC m=+145.594047327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.334199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.334555 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.834527215 +0000 UTC m=+145.695109002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.334612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.334952 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.834939926 +0000 UTC m=+145.695521703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.357571 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" podStartSLOduration=125.357552599 podStartE2EDuration="2m5.357552599s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.272950238 +0000 UTC m=+145.133532005" watchObservedRunningTime="2025-11-24 17:51:46.357552599 +0000 UTC m=+145.218134376" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.425622 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6cqxg" podStartSLOduration=125.425599953 podStartE2EDuration="2m5.425599953s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.358221656 +0000 UTC m=+145.218803443" watchObservedRunningTime="2025-11-24 17:51:46.425599953 +0000 UTC m=+145.286181730" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.427404 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" podStartSLOduration=125.42738808 podStartE2EDuration="2m5.42738808s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.426033885 +0000 UTC m=+145.286615662" watchObservedRunningTime="2025-11-24 17:51:46.42738808 +0000 UTC m=+145.287969857" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.435205 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.435310 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.935291953 +0000 UTC m=+145.795873730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.435580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.435868 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:46.935858908 +0000 UTC m=+145.796440685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.469800 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rrct4" podStartSLOduration=125.469784353 podStartE2EDuration="2m5.469784353s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.468568312 +0000 UTC m=+145.329150099" watchObservedRunningTime="2025-11-24 17:51:46.469784353 +0000 UTC m=+145.330366130" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.530103 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.537226 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.537409 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.037378025 +0000 UTC m=+145.897959802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.537684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.538044 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.038031683 +0000 UTC m=+145.898613460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.539763 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:46 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:46 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:46 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.539813 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.595299 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8hvbs" podStartSLOduration=125.595280119 podStartE2EDuration="2m5.595280119s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.579062 +0000 UTC m=+145.439643777" watchObservedRunningTime="2025-11-24 17:51:46.595280119 +0000 UTC m=+145.455861896" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.595840 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" podStartSLOduration=125.595836253 podStartE2EDuration="2m5.595836253s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
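
The router's startup probe failure just above includes the standard Kubernetes healthz aggregate body: one [+] or [-] line per registered check, "reason withheld" unless verbose output is requested, and a trailing "healthz check failed" plus an HTTP 500 when any check fails. A minimal sketch of a handler producing that format (assumed structure; not the actual openshift-router code, and the port is illustrative):

    package main

    import (
        "fmt"
        "net/http"
    )

    // check pairs a health-check name with its current result.
    type check struct {
        name string
        ok   bool
    }

    func healthz(w http.ResponseWriter, r *http.Request) {
        // States mirroring the probe body in the log: the router process is
        // up, but it has not synced routes or wired its HTTP backend yet.
        checks := []check{
            {"backend-http", false},
            {"has-synced", false},
            {"process-running", true},
        }
        body, failed := "", false
        for _, c := range checks {
            if c.ok {
                body += fmt.Sprintf("[+]%s ok\n", c.name)
            } else {
                body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                failed = true
            }
        }
        if failed {
            w.WriteHeader(http.StatusInternalServerError) // probe sees statuscode: 500
            body += "healthz check failed\n"
        }
        fmt.Fprint(w, body)
    }

    func main() {
        http.HandleFunc("/healthz", healthz)
        _ = http.ListenAndServe(":1936", nil)
    }
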
UTC" observedRunningTime="2025-11-24 17:51:46.594082878 +0000 UTC m=+145.454664665" watchObservedRunningTime="2025-11-24 17:51:46.595836253 +0000 UTC m=+145.456418030" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.629256 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" event={"ID":"399d5dbd-8565-4557-b593-f7c1ca2abcf5","Type":"ContainerStarted","Data":"910eaa55a0cf55d809f6ed9529df3e8e9f9c1a80807e77cf6fe47408e0bb3b02"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.631008 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-96ff4" event={"ID":"c39e586f-224c-4428-9114-1accf92dc1d4","Type":"ContainerStarted","Data":"a5a0174a942c5d9f361cc7088f64400e884f87f55beb7a02aba8aa3a0db9f4b3"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.634382 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" event={"ID":"0ac86e85-7038-49ec-977e-e27bad8a5d26","Type":"ContainerStarted","Data":"88a9e38cecad33ffd6c3548d6505321b4c887432c761bebcf119a7417e51b715"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.640556 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.640728 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.140691109 +0000 UTC m=+146.001272886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.641155 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.641737 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.141715136 +0000 UTC m=+146.002296973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.642517 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lvpkq" podStartSLOduration=125.642503786 podStartE2EDuration="2m5.642503786s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.639970551 +0000 UTC m=+145.500552328" watchObservedRunningTime="2025-11-24 17:51:46.642503786 +0000 UTC m=+145.503085563" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.642625 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fph7m" event={"ID":"ae2d2ccc-2d97-4992-a879-286f628bb1b0","Type":"ContainerStarted","Data":"21052348b0a9ff4fbc52d630a3f668dc8beebf802be15da32302e2b15d278580"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.642665 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fph7m" event={"ID":"ae2d2ccc-2d97-4992-a879-286f628bb1b0","Type":"ContainerStarted","Data":"34e1a4a0099b58460074719ea465c96b7b4dd5afc56b7d7a6f0b06ca70f38a09"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.643176 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fph7m" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.650385 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" event={"ID":"1dbfc132-bce0-4046-90a6-7cdac7abfe8c","Type":"ContainerStarted","Data":"64aee00455893db8f2822bac07d6df1d9f928107c85f91f206b0abb481c51734"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.663683 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" event={"ID":"745e1125-670f-4e6e-acf0-e1206cf06a8e","Type":"ContainerStarted","Data":"93ff50e70cfe0fe0fe7e7ce1f11f1643b7c620aff3d84e490a91066e29d63d4a"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.663732 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" event={"ID":"745e1125-670f-4e6e-acf0-e1206cf06a8e","Type":"ContainerStarted","Data":"9d621ebde8a445b3a7cdfca82e5904e25656fb6c1b412025f61c1e823ddce8cc"} Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.665320 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-krvz2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.665370 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" podUID="3c9a2467-2f2d-4c60-98f9-8f46a61fdcc9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 
10.217.0.37:5443: connect: connection refused" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.665601 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n8bw2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.665636 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" podUID="43eb5e3a-3bc8-4437-a94c-e327666e2db3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.665848 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6zm9x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.665871 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.673869 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nd7rd" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.711144 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6fdjn" podStartSLOduration=125.711123465 podStartE2EDuration="2m5.711123465s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.696353665 +0000 UTC m=+145.556935442" watchObservedRunningTime="2025-11-24 17:51:46.711123465 +0000 UTC m=+145.571705242" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.742814 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.744721 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.24466959 +0000 UTC m=+146.105251367 (durationBeforeRetry 500ms). 
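
The three readiness failures just above (packageserver on 10.217.0.37:5443, catalog-operator on 10.217.0.38:8443, marketplace-operator on 10.217.0.34:8080) all end in "connect: connection refused": the probe reached the pod IP but nothing was accepting on the port yet, which is normal in the first seconds after a container starts. The dial step of such a probe reduces to a TCP connect; a toy reproduction is below (the address is copied from the log, but running this is only meaningful from a host that can reach that cluster's pod network):

```python
# Toy reproduction of the dial step behind a "connection refused" probe
# failure; only meaningful from inside the cluster's pod network.
import socket

def tcp_probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:  # ConnectionRefusedError, timeouts, etc.
        print(f"probe {host}:{port} failed: {exc}")
        return False

print(tcp_probe("10.217.0.37", 5443))  # packageserver healthz port from the log
```
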
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.772592 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.772956 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.775469 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fph7m" podStartSLOduration=7.7754605340000005 podStartE2EDuration="7.775460534s" podCreationTimestamp="2025-11-24 17:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.773664878 +0000 UTC m=+145.634246655" watchObservedRunningTime="2025-11-24 17:51:46.775460534 +0000 UTC m=+145.636042311" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.800287 4768 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2kv5d container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.800592 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" podUID="399d5dbd-8565-4557-b593-f7c1ca2abcf5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.847513 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.856398 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.356378281 +0000 UTC m=+146.216960058 (durationBeforeRetry 500ms). 
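
Note the cadence: nestedpendingoperations.go:348 is the volume manager's per-volume retry gate, and each failure here schedules the next attempt "no retries permitted until" 500ms later (durationBeforeRetry), which is why the identical TearDown/MountDevice pair recurs at roughly half-second intervals; the m=+145.9... figures are offsets on the process's monotonic clock. A toy sketch of that gating pattern, illustrative only and not kubelet code:

```python
# Toy sketch of failure-gated retries with a delay floor, loosely in the
# spirit of nestedpendingoperations (illustrative only, not kubelet code).
import time

def run_with_retry(op, delay_before_retry: float = 0.5, attempts: int = 5):
    for attempt in range(attempts):
        try:
            return op()
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}; "
                  f"no retries permitted for {delay_before_retry}s")
            time.sleep(delay_before_retry)
    raise RuntimeError("still failing after all attempts")
```
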
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.877476 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" podStartSLOduration=125.877460205 podStartE2EDuration="2m5.877460205s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.877300271 +0000 UTC m=+145.737882048" watchObservedRunningTime="2025-11-24 17:51:46.877460205 +0000 UTC m=+145.738041982" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.877613 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" podStartSLOduration=125.877609739 podStartE2EDuration="2m5.877609739s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.823965055 +0000 UTC m=+145.684546852" watchObservedRunningTime="2025-11-24 17:51:46.877609739 +0000 UTC m=+145.738191516" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.950074 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:46 crc kubenswrapper[4768]: E1124 17:51:46.950387 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.450371735 +0000 UTC m=+146.310953512 (durationBeforeRetry 500ms). 
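
The pod_startup_latency_tracker lines are bookkeeping, not errors: podStartSLOduration is essentially observedRunningTime minus podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling values (Go's zero time, 0001-01-01) mean no image pull was observed, so pull time contributes nothing. For apiserver-76f77b778f-5sdcl: created 17:49:41, observed running 17:51:46.877, about 125.88s, matching podStartSLOduration=125.877...s ("2m5.877s") to within the truncated seconds of the creation timestamp. The arithmetic, checked:

```python
# Worked check of one podStartSLOduration value from the log.
from datetime import datetime, timezone

created = datetime(2025, 11, 24, 17, 49, 41, tzinfo=timezone.utc)
observed = datetime(2025, 11, 24, 17, 51, 46, 877300, tzinfo=timezone.utc)
print((observed - created).total_seconds())  # ~125.877 -> "2m5.877s"
```
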
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.973172 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.973230 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.973286 4768 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5sdcl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 17:51:46 crc kubenswrapper[4768]: I1124 17:51:46.973313 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" podUID="745e1125-670f-4e6e-acf0-e1206cf06a8e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.056020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.056391 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.556380318 +0000 UTC m=+146.416962095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.157236 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.157816 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 17:51:47.657799463 +0000 UTC m=+146.518381240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.259319 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.259867 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.759847854 +0000 UTC m=+146.620429721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.361623 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.361842 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.861810593 +0000 UTC m=+146.722392390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.362144 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.362450 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.862436129 +0000 UTC m=+146.723017906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.463683 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.463872 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.963841034 +0000 UTC m=+146.824422811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.464071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.464396 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:47.964372267 +0000 UTC m=+146.824954044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.533544 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:47 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:47 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:47 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.533895 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.565032 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.565453 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.065421563 +0000 UTC m=+146.926003340 (durationBeforeRetry 500ms). 
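
The router's startup probe output above is an aggregated healthz in the usual Kubernetes style: each subcheck is listed as [+] (ok) or [-] (failed), and any failure makes the endpoint answer HTTP 500, hence "HTTP probe failed with statuscode: 500". Here process-running is ok while backend-http and has-synced still fail, i.e. the router process is up but has not finished syncing its initial backend state. A minimal sketch of that aggregation shape (the subcheck names are taken from the log; the logic is illustrative):

```python
# Minimal sketch of an aggregated healthz: any failing subcheck flips its
# marker to [-] and the overall status to HTTP 500 (illustrative only).
def healthz(checks: dict[str, bool]) -> tuple[int, str]:
    lines = [
        f"[{'+' if ok else '-'}]{name} " + ("ok" if ok else "failed: reason withheld")
        for name, ok in checks.items()
    ]
    ok_all = all(checks.values())
    lines.append("healthz check passed" if ok_all else "healthz check failed")
    return (200 if ok_all else 500), "\n".join(lines)

status, body = healthz({"backend-http": False, "has-synced": False, "process-running": True})
print(status)  # 500
print(body)
```
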
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.572254 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.620922 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ql6wq" podStartSLOduration=126.620907163 podStartE2EDuration="2m6.620907163s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:46.956533083 +0000 UTC m=+145.817114870" watchObservedRunningTime="2025-11-24 17:51:47.620907163 +0000 UTC m=+146.481488940" Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.666851 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.667239 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.167220478 +0000 UTC m=+147.027802255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.670740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" event={"ID":"b9a660e1-b6fc-40e7-a6d9-587f312ea140","Type":"ContainerStarted","Data":"448ee997adc3d5c9b8bbc124c2bebba5dbd21e8195f52f37eb0616c9431c5d3e"} Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.671753 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6zm9x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.671814 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.724833 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n8bw2" Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.769005 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.770309 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.270283865 +0000 UTC m=+147.130865642 (durationBeforeRetry 500ms). 
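
The csi-hostpathplugin-qd5vx ContainerStarted events above are the fix arriving: once the driver announces itself over kubelet's plugin-registration socket, kubevirt.io.hostpath-provisioner joins the registered list and the pending TearDown/MountDevice retries can finally succeed. One way to wait for that from the API side, assuming kubeconfig access and the single CRC node being named "crc":

```python
# Sketch: poll until kubevirt.io.hostpath-provisioner appears on the node's
# CSINode object (assumes kubeconfig access and a node named "crc").
import time
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

while True:
    csinode = storage.read_csi_node("crc")
    drivers = [d.name for d in (csinode.spec.drivers or [])]
    print("registered drivers:", drivers)
    if "kubevirt.io.hostpath-provisioner" in drivers:
        break
    time.sleep(2)
```
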
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.871354 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.871952 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.371937146 +0000 UTC m=+147.232518913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:47 crc kubenswrapper[4768]: I1124 17:51:47.985598 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:47 crc kubenswrapper[4768]: E1124 17:51:47.985974 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.485960026 +0000 UTC m=+147.346541803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.087283 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.087983 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.587971166 +0000 UTC m=+147.448552943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.188772 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.189173 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.689155555 +0000 UTC m=+147.549737332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.289370 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9tpf2" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.290365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.290711 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.790698203 +0000 UTC m=+147.651279980 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.392034 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.392450 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.892426327 +0000 UTC m=+147.753008104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.392811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.393115 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.893103564 +0000 UTC m=+147.753685351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.494012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.494377 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:48.994351124 +0000 UTC m=+147.854932901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.537194 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:48 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:48 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:48 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.537252 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.595914 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.596298 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.096283062 +0000 UTC m=+147.956864839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.607717 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krvz2" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.682941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" event={"ID":"b9a660e1-b6fc-40e7-a6d9-587f312ea140","Type":"ContainerStarted","Data":"4a1c5db160c13a6e3f85ab816a77b9d82b83328b67513b5f47f43294562b1518"} Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.683003 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" event={"ID":"b9a660e1-b6fc-40e7-a6d9-587f312ea140","Type":"ContainerStarted","Data":"73d3c151b4fd4e1cfc967bd79faa026f58b998571b967d22538eb58b35bc19a2"} Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.684660 4768 generic.go:334] "Generic (PLEG): container finished" podID="b824dba7-d50a-4972-ba6f-49ee0fb30604" containerID="7d0d31770b074427d01065c4f9c8c516cc2dd52adaa5af03c58fc78b329a97c8" exitCode=0 Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.684808 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" event={"ID":"b824dba7-d50a-4972-ba6f-49ee0fb30604","Type":"ContainerDied","Data":"7d0d31770b074427d01065c4f9c8c516cc2dd52adaa5af03c58fc78b329a97c8"} Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.696518 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.696815 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.196786514 +0000 UTC m=+148.057368291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.697189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.697557 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.197547394 +0000 UTC m=+148.058129171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.799115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.799312 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.299286767 +0000 UTC m=+148.159868544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.801944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.802624 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.302607842 +0000 UTC m=+148.163189619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.811139 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzqw9"] Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.812112 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.819129 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.831331 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzqw9"] Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.911088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.912411 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.412388223 +0000 UTC m=+148.272970010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.912867 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-utilities\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.913183 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.913313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-catalog-content\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:48 crc kubenswrapper[4768]: I1124 17:51:48.913469 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgcqw\" (UniqueName: \"kubernetes.io/projected/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-kube-api-access-jgcqw\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:48 crc kubenswrapper[4768]: E1124 17:51:48.913784 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.413767949 +0000 UTC m=+148.274349716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.006522 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x8424"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.007874 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.013455 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.014021 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.014086 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.514069564 +0000 UTC m=+148.374651341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.014938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.015440 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.515417989 +0000 UTC m=+148.375999766 (durationBeforeRetry 500ms). 
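
For the new marketplace pods the volume reconciler runs its normal sequence, visible in order in the surrounding lines: VerifyControllerAttachedVolume, then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded" for utilities and catalog-content (emptyDir) and kube-api-access-jgcqw (a projected service-account-token volume). Unlike the CSI-backed PVC above, these volume types are built into kubelet and need no driver registration, so they mount on the first pass. The volume shapes involved, expressed with the kubernetes Python client models (an illustrative fragment, not the pod's actual spec):

```python
# Sketch: the volume types mounted for certified-operators-vzqw9, expressed
# with kubernetes Python client models (illustrative, not the actual spec).
from kubernetes import client

volumes = [
    client.V1Volume(name="utilities", empty_dir=client.V1EmptyDirVolumeSource()),
    client.V1Volume(name="catalog-content", empty_dir=client.V1EmptyDirVolumeSource()),
    # kube-api-access-* volumes are kubelet-generated projected volumes that
    # carry the service-account token; an empty source list stands in here.
    client.V1Volume(name="kube-api-access", projected=client.V1ProjectedVolumeSource(sources=[])),
]
for v in volumes:
    print(v.name, "->", "emptyDir" if v.empty_dir else "projected")
```
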
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.015773 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.015895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-catalog-content\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.016001 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.016143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgcqw\" (UniqueName: \"kubernetes.io/projected/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-kube-api-access-jgcqw\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.016294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-utilities\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.016411 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.022726 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.017978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-utilities\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.022067 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8424"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.022208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.016764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-catalog-content\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.024338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.029502 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.032052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.055876 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgcqw\" (UniqueName: \"kubernetes.io/projected/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-kube-api-access-jgcqw\") pod \"certified-operators-vzqw9\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.124274 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.124461 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.62443853 +0000 UTC m=+148.485020307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.124539 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-utilities\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.124602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.124639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-catalog-content\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.124662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5z6d\" (UniqueName: \"kubernetes.io/projected/a22825e1-d87e-48cf-b169-7d1360923af4-kube-api-access-j5z6d\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.124930 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.624920003 +0000 UTC m=+148.485501780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.127109 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.204698 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2k6bn"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.205592 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.217566 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.221734 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2k6bn"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.225595 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.225780 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.725751053 +0000 UTC m=+148.586332830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.225814 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-utilities\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.225867 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-utilities\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.225898 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2p4r\" (UniqueName: \"kubernetes.io/projected/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-kube-api-access-k2p4r\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.226000 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.226045 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-catalog-content\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.226072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-catalog-content\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.226093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5z6d\" (UniqueName: \"kubernetes.io/projected/a22825e1-d87e-48cf-b169-7d1360923af4-kube-api-access-j5z6d\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.226374 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.726361168 +0000 UTC m=+148.586942945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.226692 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-catalog-content\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.226734 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-utilities\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.237746 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.246172 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.255015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5z6d\" (UniqueName: \"kubernetes.io/projected/a22825e1-d87e-48cf-b169-7d1360923af4-kube-api-access-j5z6d\") pod \"community-operators-x8424\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.327382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.327548 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.827517976 +0000 UTC m=+148.688099753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.327636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2p4r\" (UniqueName: \"kubernetes.io/projected/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-kube-api-access-k2p4r\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.327690 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.327723 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-catalog-content\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.327766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-utilities\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.328376 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-catalog-content\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.328666 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-utilities\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.328888 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.828875872 +0000 UTC m=+148.689457649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.364181 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2p4r\" (UniqueName: \"kubernetes.io/projected/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-kube-api-access-k2p4r\") pod \"certified-operators-2k6bn\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.383700 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.401398 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gp2qx"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.405786 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.414292 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gp2qx"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.429098 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.429312 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-utilities\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.429364 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlf7n\" (UniqueName: \"kubernetes.io/projected/5fa06755-0386-4960-9adc-258106178fca-kube-api-access-vlf7n\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.429402 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-catalog-content\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.429530 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:49.929515707 +0000 UTC m=+148.790097484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.454128 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzqw9"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.523129 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.530226 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-catalog-content\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.530266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.530298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-utilities\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.530334 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlf7n\" (UniqueName: \"kubernetes.io/projected/5fa06755-0386-4960-9adc-258106178fca-kube-api-access-vlf7n\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.530995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-catalog-content\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.531213 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.031202468 +0000 UTC m=+148.891784245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.531563 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-utilities\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.537057 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:49 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:49 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:49 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.537100 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.553348 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlf7n\" (UniqueName: \"kubernetes.io/projected/5fa06755-0386-4960-9adc-258106178fca-kube-api-access-vlf7n\") pod \"community-operators-gp2qx\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: W1124 17:51:49.613574 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-6a7cc3700a04feb97ae5b778dd627fb930b8e99b90bfbabf5197b7867ad2c9a5 WatchSource:0}: Error finding container 6a7cc3700a04feb97ae5b778dd627fb930b8e99b90bfbabf5197b7867ad2c9a5: Status 404 returned error can't find the container with id 6a7cc3700a04feb97ae5b778dd627fb930b8e99b90bfbabf5197b7867ad2c9a5 Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.631354 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.631563 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.131534916 +0000 UTC m=+148.992116693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.631637 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.631955 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.131942276 +0000 UTC m=+148.992524143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.697369 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" event={"ID":"b9a660e1-b6fc-40e7-a6d9-587f312ea140","Type":"ContainerStarted","Data":"4920b5dee4bbb6a1c8257dbc0ef24ccb9dc0a0c1313ff1ab927c42d6438cae00"} Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.704820 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzqw9" event={"ID":"17fb1883-b4da-4e64-b27a-fdf11ff21ac2","Type":"ContainerStarted","Data":"7c6b3dd40f795577fdbe4b0a0dcdc2c441cfc0fd184aa97e337f35d390204733"} Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.706407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6a7cc3700a04feb97ae5b778dd627fb930b8e99b90bfbabf5197b7867ad2c9a5"} Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.734945 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.735205 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.235186838 +0000 UTC m=+149.095768615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.735266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.735536 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.235528587 +0000 UTC m=+149.096110374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.740213 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" podStartSLOduration=10.740194647 podStartE2EDuration="10.740194647s" podCreationTimestamp="2025-11-24 17:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:49.740015233 +0000 UTC m=+148.600597020" watchObservedRunningTime="2025-11-24 17:51:49.740194647 +0000 UTC m=+148.600776424" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.745294 4768 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.748229 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.836019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.836213 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.336186162 +0000 UTC m=+149.196767939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.836342 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.836668 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.336659344 +0000 UTC m=+149.197241221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.866557 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2k6bn"] Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.937033 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.937220 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.437178376 +0000 UTC m=+149.297760153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.937649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:49 crc kubenswrapper[4768]: E1124 17:51:49.938265 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 17:51:50.438251903 +0000 UTC m=+149.298833680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zzvkd" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.941600 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8424"] Nov 24 17:51:49 crc kubenswrapper[4768]: W1124 17:51:49.942171 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda22825e1_d87e_48cf_b169_7d1360923af4.slice/crio-126d0bb6f2856b6e90a98922a8bfde18bb83ae83aaf34175baf63dd2dd569d27 WatchSource:0}: Error finding container 126d0bb6f2856b6e90a98922a8bfde18bb83ae83aaf34175baf63dd2dd569d27: Status 404 returned error can't find the container with id 126d0bb6f2856b6e90a98922a8bfde18bb83ae83aaf34175baf63dd2dd569d27 Nov 24 17:51:49 crc kubenswrapper[4768]: I1124 17:51:49.993956 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.037837 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gp2qx"] Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.038219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:50 crc kubenswrapper[4768]: E1124 17:51:50.038616 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 17:51:50.538601411 +0000 UTC m=+149.399183188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 17:51:50 crc kubenswrapper[4768]: W1124 17:51:50.058699 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa06755_0386_4960_9adc_258106178fca.slice/crio-3337af008d629a63c3851d5733e8e5f5e408c656f09fe5a7246e39a4a2a167cc WatchSource:0}: Error finding container 3337af008d629a63c3851d5733e8e5f5e408c656f09fe5a7246e39a4a2a167cc: Status 404 returned error can't find the container with id 3337af008d629a63c3851d5733e8e5f5e408c656f09fe5a7246e39a4a2a167cc Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.068813 4768 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T17:51:49.74532968Z","Handler":null,"Name":""} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.074437 4768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.074472 4768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.139627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b824dba7-d50a-4972-ba6f-49ee0fb30604-config-volume\") pod \"b824dba7-d50a-4972-ba6f-49ee0fb30604\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.139902 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b824dba7-d50a-4972-ba6f-49ee0fb30604-secret-volume\") pod \"b824dba7-d50a-4972-ba6f-49ee0fb30604\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.139954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk7t6\" (UniqueName: \"kubernetes.io/projected/b824dba7-d50a-4972-ba6f-49ee0fb30604-kube-api-access-qk7t6\") pod \"b824dba7-d50a-4972-ba6f-49ee0fb30604\" (UID: \"b824dba7-d50a-4972-ba6f-49ee0fb30604\") " Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.140173 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.140320 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b824dba7-d50a-4972-ba6f-49ee0fb30604-config-volume" (OuterVolumeSpecName: "config-volume") pod "b824dba7-d50a-4972-ba6f-49ee0fb30604" (UID: "b824dba7-d50a-4972-ba6f-49ee0fb30604"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.144018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b824dba7-d50a-4972-ba6f-49ee0fb30604-kube-api-access-qk7t6" (OuterVolumeSpecName: "kube-api-access-qk7t6") pod "b824dba7-d50a-4972-ba6f-49ee0fb30604" (UID: "b824dba7-d50a-4972-ba6f-49ee0fb30604"). InnerVolumeSpecName "kube-api-access-qk7t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.144062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b824dba7-d50a-4972-ba6f-49ee0fb30604-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b824dba7-d50a-4972-ba6f-49ee0fb30604" (UID: "b824dba7-d50a-4972-ba6f-49ee0fb30604"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.148520 4768 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.148560 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.180176 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zzvkd\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.240939 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.241259 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b824dba7-d50a-4972-ba6f-49ee0fb30604-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.241276 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk7t6\" (UniqueName: \"kubernetes.io/projected/b824dba7-d50a-4972-ba6f-49ee0fb30604-kube-api-access-qk7t6\") on node \"crc\" DevicePath \"\"" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.241286 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b824dba7-d50a-4972-ba6f-49ee0fb30604-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.275393 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.393577 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.532534 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:50 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:50 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:50 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.532813 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.570058 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zzvkd"] Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.713299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" event={"ID":"3bd473c0-17b2-4d7c-830a-99afe5266762","Type":"ContainerStarted","Data":"ff4385609ffbb76fe82855a6fd39b4877e1b18acb5003b4a51519cfb506ca5cb"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.716215 4768 generic.go:334] "Generic (PLEG): container finished" podID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerID="5ba56e8f90f48818c07dfbcb3fa837863db180a44ba620dd9a704b2cb4b08565" exitCode=0 Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.716555 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzqw9" event={"ID":"17fb1883-b4da-4e64-b27a-fdf11ff21ac2","Type":"ContainerDied","Data":"5ba56e8f90f48818c07dfbcb3fa837863db180a44ba620dd9a704b2cb4b08565"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.718155 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.720311 4768 generic.go:334] "Generic (PLEG): container finished" podID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerID="7a6952937b93482b7af0a9d0277f879948014ea7096503e22ecbca06245ce51b" exitCode=0 Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.720370 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k6bn" event={"ID":"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9","Type":"ContainerDied","Data":"7a6952937b93482b7af0a9d0277f879948014ea7096503e22ecbca06245ce51b"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.720394 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k6bn" event={"ID":"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9","Type":"ContainerStarted","Data":"d44cc620b6433feca96ebce38ec5f1cbef1a46f1a654b0422d90a4032e64ae1a"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.724020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f4fc79429d42bc00654e8e3e014422fac25018daf04c5151b21a4c129e7dbbd7"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.727943 4768 generic.go:334] "Generic (PLEG): container finished" podID="5fa06755-0386-4960-9adc-258106178fca" containerID="f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90" exitCode=0 Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.728030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gp2qx" event={"ID":"5fa06755-0386-4960-9adc-258106178fca","Type":"ContainerDied","Data":"f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.728048 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gp2qx" event={"ID":"5fa06755-0386-4960-9adc-258106178fca","Type":"ContainerStarted","Data":"3337af008d629a63c3851d5733e8e5f5e408c656f09fe5a7246e39a4a2a167cc"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.732948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e5c7ff77ea0c20b905a69a97a62ff1af2e48c73df4a3897600490753902d47c6"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.733021 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3ce217a395f148b38d1c9d11cbb0b627453d0e5cc7846ccf0141fe4ff672b1c1"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.738126 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" event={"ID":"b824dba7-d50a-4972-ba6f-49ee0fb30604","Type":"ContainerDied","Data":"24d8f09cebb3b30911216b3010a67b682e959371e74d288f82f3627229f8398a"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.738191 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d8f09cebb3b30911216b3010a67b682e959371e74d288f82f3627229f8398a" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.748287 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.760474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6d4fde0ca7aa59ba4ca7b59d6da19e483fcdade69544b56af0291140408059b8"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.760551 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2cff2e8bf3d7e82d7b80606b73b4f2e412d31469b00169f437a5360d052482bf"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.761241 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.762421 4768 generic.go:334] "Generic (PLEG): container finished" podID="a22825e1-d87e-48cf-b169-7d1360923af4" containerID="bf170d45e1f1181d8373d47566b043b3f77f9d8e4582b89e013230f3224948e0" exitCode=0 Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.762608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8424" event={"ID":"a22825e1-d87e-48cf-b169-7d1360923af4","Type":"ContainerDied","Data":"bf170d45e1f1181d8373d47566b043b3f77f9d8e4582b89e013230f3224948e0"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.762670 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8424" event={"ID":"a22825e1-d87e-48cf-b169-7d1360923af4","Type":"ContainerStarted","Data":"126d0bb6f2856b6e90a98922a8bfde18bb83ae83aaf34175baf63dd2dd569d27"} Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.811980 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tjmmt"] Nov 24 17:51:50 crc kubenswrapper[4768]: E1124 17:51:50.812692 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b824dba7-d50a-4972-ba6f-49ee0fb30604" containerName="collect-profiles" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.812710 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b824dba7-d50a-4972-ba6f-49ee0fb30604" containerName="collect-profiles" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.812854 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b824dba7-d50a-4972-ba6f-49ee0fb30604" containerName="collect-profiles" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.813819 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.815677 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.828506 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjmmt"] Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.859382 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.860107 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-utilities\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.860140 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-catalog-content\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.860156 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.860183 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8vbg\" (UniqueName: \"kubernetes.io/projected/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-kube-api-access-b8vbg\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.861884 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.862305 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.862931 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.961026 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-utilities\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.961085 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-catalog-content\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.961142 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.961169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8vbg\" (UniqueName: \"kubernetes.io/projected/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-kube-api-access-b8vbg\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.961208 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.961739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-utilities\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.962030 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-catalog-content\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:50 crc kubenswrapper[4768]: I1124 17:51:50.982777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8vbg\" (UniqueName: \"kubernetes.io/projected/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-kube-api-access-b8vbg\") pod \"redhat-marketplace-tjmmt\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.062787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.062854 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.062934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.078131 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.135222 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.179875 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.204907 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7zwr6"] Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.206310 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.226327 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zwr6"] Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.264951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-catalog-content\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.265209 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh98\" (UniqueName: \"kubernetes.io/projected/81701703-9dca-4d65-a4b5-47c74ead9c5f-kube-api-access-lnh98\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.265270 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-utilities\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.366785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-utilities\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.366844 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-catalog-content\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.366882 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh98\" (UniqueName: \"kubernetes.io/projected/81701703-9dca-4d65-a4b5-47c74ead9c5f-kube-api-access-lnh98\") pod \"redhat-marketplace-7zwr6\" (UID: 
\"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.367818 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-utilities\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.367876 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-catalog-content\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.387679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh98\" (UniqueName: \"kubernetes.io/projected/81701703-9dca-4d65-a4b5-47c74ead9c5f-kube-api-access-lnh98\") pod \"redhat-marketplace-7zwr6\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.434706 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.533706 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:51 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:51 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:51 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.533982 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.559562 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.589248 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjmmt"] Nov 24 17:51:51 crc kubenswrapper[4768]: W1124 17:51:51.599393 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21aa9a49_fa80_4c66_97bb_bcd28c31aaef.slice/crio-9361c2dc7af9b1ce4195b740a8683eb9b4bac9e7f67fcddc8b4d6b9dd01f9e2e WatchSource:0}: Error finding container 9361c2dc7af9b1ce4195b740a8683eb9b4bac9e7f67fcddc8b4d6b9dd01f9e2e: Status 404 returned error can't find the container with id 9361c2dc7af9b1ce4195b740a8683eb9b4bac9e7f67fcddc8b4d6b9dd01f9e2e Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.743135 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-lbcxh" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.769828 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.769877 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.770303 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.770328 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.779173 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.793593 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"105926c2-41b3-4a78-a8a4-cdf09ac261dd","Type":"ContainerStarted","Data":"c11c3e7d39e6e0558c635ade000825ed62b19154a68d8fb7d40f9e47a3cbb84c"} Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.797535 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kv5d" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.805192 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" event={"ID":"3bd473c0-17b2-4d7c-830a-99afe5266762","Type":"ContainerStarted","Data":"b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512"} Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.806865 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.818086 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjmmt" event={"ID":"21aa9a49-fa80-4c66-97bb-bcd28c31aaef","Type":"ContainerStarted","Data":"9361c2dc7af9b1ce4195b740a8683eb9b4bac9e7f67fcddc8b4d6b9dd01f9e2e"} Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.895123 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" podStartSLOduration=130.895107869 podStartE2EDuration="2m10.895107869s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:51.892720027 +0000 UTC m=+150.753301804" watchObservedRunningTime="2025-11-24 17:51:51.895107869 +0000 UTC m=+150.755689646" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.921811 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.958640 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zwr6"] Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.981879 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:51 crc kubenswrapper[4768]: I1124 17:51:51.990909 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5sdcl" Nov 24 17:51:52 crc kubenswrapper[4768]: W1124 17:51:52.014061 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81701703_9dca_4d65_a4b5_47c74ead9c5f.slice/crio-3bdb81de892a4c279ca85cf52deeb7f7ed37dffc10bd310d4b4535975c785ada WatchSource:0}: Error finding container 3bdb81de892a4c279ca85cf52deeb7f7ed37dffc10bd310d4b4535975c785ada: Status 404 returned error can't find the container with id 3bdb81de892a4c279ca85cf52deeb7f7ed37dffc10bd310d4b4535975c785ada Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.137870 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.137915 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.138828 4768 patch_prober.go:28] interesting pod/console-f9d7485db-tj982 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.138875 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tj982" podUID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.198720 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-cnkqp"] Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.199782 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.205041 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.206118 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnkqp"] Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.303190 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-949ms\" (UniqueName: \"kubernetes.io/projected/7ef31d38-da28-4060-b917-2b2488e14067-kube-api-access-949ms\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.303247 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-catalog-content\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.303288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-utilities\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.404834 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-949ms\" (UniqueName: \"kubernetes.io/projected/7ef31d38-da28-4060-b917-2b2488e14067-kube-api-access-949ms\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.405179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-catalog-content\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.405198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-utilities\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.405697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-utilities\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.405961 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-catalog-content\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.432068 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-949ms\" (UniqueName: \"kubernetes.io/projected/7ef31d38-da28-4060-b917-2b2488e14067-kube-api-access-949ms\") pod \"redhat-operators-cnkqp\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.530052 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.530142 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.533674 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:52 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:52 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:52 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.533757 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.549895 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.601474 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5z9qb"] Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.602435 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.610812 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zpp\" (UniqueName: \"kubernetes.io/projected/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-kube-api-access-r2zpp\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.610867 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-utilities\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.610922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-catalog-content\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.613695 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5z9qb"] Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.713273 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zpp\" (UniqueName: \"kubernetes.io/projected/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-kube-api-access-r2zpp\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.713337 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-utilities\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.713361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-catalog-content\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.714219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-catalog-content\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.714265 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-utilities\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.736800 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r2zpp\" (UniqueName: \"kubernetes.io/projected/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-kube-api-access-r2zpp\") pod \"redhat-operators-5z9qb\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.814298 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnkqp"] Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.827827 4768 generic.go:334] "Generic (PLEG): container finished" podID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerID="691a97bfc5ba991cd0822c00b217b9ce229ac3e09cdd462e018cae0171fb0dfe" exitCode=0 Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.827911 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjmmt" event={"ID":"21aa9a49-fa80-4c66-97bb-bcd28c31aaef","Type":"ContainerDied","Data":"691a97bfc5ba991cd0822c00b217b9ce229ac3e09cdd462e018cae0171fb0dfe"} Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.833177 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"105926c2-41b3-4a78-a8a4-cdf09ac261dd","Type":"ContainerStarted","Data":"07d423821c293aef45a41c5b1f7c13f830240e0065bc3303da29c821806239de"} Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.835542 4768 generic.go:334] "Generic (PLEG): container finished" podID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerID="7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21" exitCode=0 Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.836596 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zwr6" event={"ID":"81701703-9dca-4d65-a4b5-47c74ead9c5f","Type":"ContainerDied","Data":"7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21"} Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.836622 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zwr6" event={"ID":"81701703-9dca-4d65-a4b5-47c74ead9c5f","Type":"ContainerStarted","Data":"3bdb81de892a4c279ca85cf52deeb7f7ed37dffc10bd310d4b4535975c785ada"} Nov 24 17:51:52 crc kubenswrapper[4768]: W1124 17:51:52.858877 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ef31d38_da28_4060_b917_2b2488e14067.slice/crio-c4002352a1b1d4595ef7aa71318e95a6d49e4ed9566ac3a09be5882b14d885c2 WatchSource:0}: Error finding container c4002352a1b1d4595ef7aa71318e95a6d49e4ed9566ac3a09be5882b14d885c2: Status 404 returned error can't find the container with id c4002352a1b1d4595ef7aa71318e95a6d49e4ed9566ac3a09be5882b14d885c2 Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.890156 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.890022142 podStartE2EDuration="2.890022142s" podCreationTimestamp="2025-11-24 17:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:51:52.870422616 +0000 UTC m=+151.731004393" watchObservedRunningTime="2025-11-24 17:51:52.890022142 +0000 UTC m=+151.750603919" Nov 24 17:51:52 crc kubenswrapper[4768]: I1124 17:51:52.942616 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.139715 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5z9qb"] Nov 24 17:51:53 crc kubenswrapper[4768]: W1124 17:51:53.198922 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f663eea_de7c_4f55_92e4_2ffbc5c6b5a7.slice/crio-8e26730ef236dad64cb2f58717c2389ef109daa794e261f5a42059e0c6bd3872 WatchSource:0}: Error finding container 8e26730ef236dad64cb2f58717c2389ef109daa794e261f5a42059e0c6bd3872: Status 404 returned error can't find the container with id 8e26730ef236dad64cb2f58717c2389ef109daa794e261f5a42059e0c6bd3872 Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.317659 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.320084 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.322720 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.322944 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.333127 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.427272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.427407 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.528611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.528716 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.528875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.534090 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:53 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:53 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:53 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.534156 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.547160 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.684113 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.858286 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z9qb" event={"ID":"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7","Type":"ContainerStarted","Data":"8e26730ef236dad64cb2f58717c2389ef109daa794e261f5a42059e0c6bd3872"} Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.881764 4768 generic.go:334] "Generic (PLEG): container finished" podID="105926c2-41b3-4a78-a8a4-cdf09ac261dd" containerID="07d423821c293aef45a41c5b1f7c13f830240e0065bc3303da29c821806239de" exitCode=0 Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.881839 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"105926c2-41b3-4a78-a8a4-cdf09ac261dd","Type":"ContainerDied","Data":"07d423821c293aef45a41c5b1f7c13f830240e0065bc3303da29c821806239de"} Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.885238 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ef31d38-da28-4060-b917-2b2488e14067" containerID="8612189cf5769aa4220ab8ea460bf2378e6710f77a479ac3e9048eec795062f2" exitCode=0 Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.886098 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkqp" event={"ID":"7ef31d38-da28-4060-b917-2b2488e14067","Type":"ContainerDied","Data":"8612189cf5769aa4220ab8ea460bf2378e6710f77a479ac3e9048eec795062f2"} Nov 24 17:51:53 crc kubenswrapper[4768]: I1124 17:51:53.886123 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkqp" event={"ID":"7ef31d38-da28-4060-b917-2b2488e14067","Type":"ContainerStarted","Data":"c4002352a1b1d4595ef7aa71318e95a6d49e4ed9566ac3a09be5882b14d885c2"} Nov 24 17:51:54 crc kubenswrapper[4768]: I1124 17:51:54.185847 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 17:51:54 crc kubenswrapper[4768]: W1124 17:51:54.246782 4768 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda6c38c11_3667_4a14_82d1_c8dbabc968d2.slice/crio-075093db0650a26d57794acd3b7f0edc476f304ef142a17c9175f5f490e8035a WatchSource:0}: Error finding container 075093db0650a26d57794acd3b7f0edc476f304ef142a17c9175f5f490e8035a: Status 404 returned error can't find the container with id 075093db0650a26d57794acd3b7f0edc476f304ef142a17c9175f5f490e8035a Nov 24 17:51:54 crc kubenswrapper[4768]: I1124 17:51:54.535629 4768 patch_prober.go:28] interesting pod/router-default-5444994796-8hvbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 17:51:54 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 17:51:54 crc kubenswrapper[4768]: [+]process-running ok Nov 24 17:51:54 crc kubenswrapper[4768]: healthz check failed Nov 24 17:51:54 crc kubenswrapper[4768]: I1124 17:51:54.536091 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8hvbs" podUID="b915353f-fcb8-4d2c-841f-a2091f2c7d96" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:51:54 crc kubenswrapper[4768]: I1124 17:51:54.897509 4768 generic.go:334] "Generic (PLEG): container finished" podID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerID="5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc" exitCode=0 Nov 24 17:51:54 crc kubenswrapper[4768]: I1124 17:51:54.897529 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z9qb" event={"ID":"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7","Type":"ContainerDied","Data":"5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc"} Nov 24 17:51:54 crc kubenswrapper[4768]: I1124 17:51:54.899686 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6c38c11-3667-4a14-82d1-c8dbabc968d2","Type":"ContainerStarted","Data":"075093db0650a26d57794acd3b7f0edc476f304ef142a17c9175f5f490e8035a"} Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.301514 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.467964 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kube-api-access\") pod \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.468041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kubelet-dir\") pod \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\" (UID: \"105926c2-41b3-4a78-a8a4-cdf09ac261dd\") " Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.468318 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "105926c2-41b3-4a78-a8a4-cdf09ac261dd" (UID: "105926c2-41b3-4a78-a8a4-cdf09ac261dd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.478762 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "105926c2-41b3-4a78-a8a4-cdf09ac261dd" (UID: "105926c2-41b3-4a78-a8a4-cdf09ac261dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.539530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.545730 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8hvbs" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.569748 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.569783 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/105926c2-41b3-4a78-a8a4-cdf09ac261dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.912339 4768 generic.go:334] "Generic (PLEG): container finished" podID="a6c38c11-3667-4a14-82d1-c8dbabc968d2" containerID="2d1c74fa568bccbcd1626babb5ebc6b4eeb372146cd2ab77a17a1987dc9b78d9" exitCode=0 Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.912432 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6c38c11-3667-4a14-82d1-c8dbabc968d2","Type":"ContainerDied","Data":"2d1c74fa568bccbcd1626babb5ebc6b4eeb372146cd2ab77a17a1987dc9b78d9"} Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.918430 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.918808 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"105926c2-41b3-4a78-a8a4-cdf09ac261dd","Type":"ContainerDied","Data":"c11c3e7d39e6e0558c635ade000825ed62b19154a68d8fb7d40f9e47a3cbb84c"} Nov 24 17:51:55 crc kubenswrapper[4768]: I1124 17:51:55.918829 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11c3e7d39e6e0558c635ade000825ed62b19154a68d8fb7d40f9e47a3cbb84c" Nov 24 17:51:57 crc kubenswrapper[4768]: I1124 17:51:57.952312 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fph7m" Nov 24 17:52:01 crc kubenswrapper[4768]: I1124 17:52:01.768981 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:52:01 crc kubenswrapper[4768]: I1124 17:52:01.770261 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:52:01 crc kubenswrapper[4768]: I1124 17:52:01.769155 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-fgt8t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 17:52:01 crc kubenswrapper[4768]: I1124 17:52:01.770451 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fgt8t" podUID="73cd8533-3450-46e3-89b9-6dd092750ef9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.095794 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.195609 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.199533 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tj982" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.272440 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kube-api-access\") pod \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.272575 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kubelet-dir\") pod \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\" (UID: \"a6c38c11-3667-4a14-82d1-c8dbabc968d2\") " Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.272722 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a6c38c11-3667-4a14-82d1-c8dbabc968d2" (UID: "a6c38c11-3667-4a14-82d1-c8dbabc968d2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.272978 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.277571 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a6c38c11-3667-4a14-82d1-c8dbabc968d2" (UID: "a6c38c11-3667-4a14-82d1-c8dbabc968d2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.374515 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6c38c11-3667-4a14-82d1-c8dbabc968d2-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.961795 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.961823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a6c38c11-3667-4a14-82d1-c8dbabc968d2","Type":"ContainerDied","Data":"075093db0650a26d57794acd3b7f0edc476f304ef142a17c9175f5f490e8035a"} Nov 24 17:52:02 crc kubenswrapper[4768]: I1124 17:52:02.961871 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075093db0650a26d57794acd3b7f0edc476f304ef142a17c9175f5f490e8035a" Nov 24 17:52:03 crc kubenswrapper[4768]: I1124 17:52:03.289937 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:52:03 crc kubenswrapper[4768]: I1124 17:52:03.297438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b50668f2-0a0b-40f4-9a38-3df082cf931e-metrics-certs\") pod \"network-metrics-daemon-hpd8h\" (UID: \"b50668f2-0a0b-40f4-9a38-3df082cf931e\") " pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:52:03 crc kubenswrapper[4768]: I1124 17:52:03.329325 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hpd8h" Nov 24 17:52:10 crc kubenswrapper[4768]: I1124 17:52:10.399766 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:52:11 crc kubenswrapper[4768]: I1124 17:52:11.792926 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fgt8t" Nov 24 17:52:13 crc kubenswrapper[4768]: I1124 17:52:13.656024 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:52:13 crc kubenswrapper[4768]: I1124 17:52:13.656421 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:52:18 crc kubenswrapper[4768]: E1124 17:52:18.375767 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 17:52:18 crc kubenswrapper[4768]: E1124 17:52:18.376015 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2p4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2k6bn_openshift-marketplace(e7d38bf6-7bd5-468e-ac9d-508e8aea36b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 17:52:18 crc kubenswrapper[4768]: E1124 17:52:18.378593 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2k6bn" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" Nov 24 17:52:20 crc kubenswrapper[4768]: I1124 17:52:20.267683 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-qd5vx" podUID="b9a660e1-b6fc-40e7-a6d9-587f312ea140" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 17:52:22 crc kubenswrapper[4768]: E1124 17:52:22.833184 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2k6bn" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" Nov 24 17:52:22 crc kubenswrapper[4768]: I1124 17:52:22.863184 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7frgb" Nov 24 17:52:22 crc kubenswrapper[4768]: E1124 17:52:22.948184 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 17:52:22 crc kubenswrapper[4768]: E1124 17:52:22.948327 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5z6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x8424_openshift-marketplace(a22825e1-d87e-48cf-b169-7d1360923af4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 17:52:22 crc kubenswrapper[4768]: E1124 17:52:22.949545 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x8424" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" Nov 24 17:52:23 crc kubenswrapper[4768]: E1124 17:52:23.069047 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 17:52:23 crc kubenswrapper[4768]: E1124 17:52:23.069319 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgcqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vzqw9_openshift-marketplace(17fb1883-b4da-4e64-b27a-fdf11ff21ac2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 17:52:23 crc kubenswrapper[4768]: E1124 17:52:23.070579 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vzqw9" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.564298 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vzqw9" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.565038 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x8424" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.586162 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.586458 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8vbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tjmmt_openshift-marketplace(21aa9a49-fa80-4c66-97bb-bcd28c31aaef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.587732 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tjmmt" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.620960 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.621252 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlf7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gp2qx_openshift-marketplace(5fa06755-0386-4960-9adc-258106178fca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.623804 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gp2qx" podUID="5fa06755-0386-4960-9adc-258106178fca" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.642445 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.642650 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnh98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7zwr6_openshift-marketplace(81701703-9dca-4d65-a4b5-47c74ead9c5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 17:52:24 crc kubenswrapper[4768]: E1124 17:52:24.643830 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7zwr6" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" Nov 24 17:52:27 crc kubenswrapper[4768]: E1124 17:52:27.293983 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gp2qx" podUID="5fa06755-0386-4960-9adc-258106178fca" Nov 24 17:52:27 crc kubenswrapper[4768]: E1124 17:52:27.294000 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tjmmt" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" Nov 24 17:52:27 crc kubenswrapper[4768]: E1124 17:52:27.294190 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7zwr6" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" Nov 24 17:52:27 crc kubenswrapper[4768]: I1124 17:52:27.674138 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hpd8h"] Nov 24 17:52:27 crc kubenswrapper[4768]: W1124 17:52:27.681443 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb50668f2_0a0b_40f4_9a38_3df082cf931e.slice/crio-7b94b81ff7618017de5ce567a15088f9b123dd8337f9fdb208e92fb5c13aad5d WatchSource:0}: Error finding container 7b94b81ff7618017de5ce567a15088f9b123dd8337f9fdb208e92fb5c13aad5d: Status 404 returned error can't find the container with id 7b94b81ff7618017de5ce567a15088f9b123dd8337f9fdb208e92fb5c13aad5d Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.318141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" event={"ID":"b50668f2-0a0b-40f4-9a38-3df082cf931e","Type":"ContainerStarted","Data":"f1900aaf168a700c2f69608645ba417b445b80723c500a4de343a880c14c1a51"} Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.319434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" event={"ID":"b50668f2-0a0b-40f4-9a38-3df082cf931e","Type":"ContainerStarted","Data":"b56d34f15770eabedb41bb78a89d4539f328bc2a966ad7df9bdb28d5852b85d4"} Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.319547 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hpd8h" event={"ID":"b50668f2-0a0b-40f4-9a38-3df082cf931e","Type":"ContainerStarted","Data":"7b94b81ff7618017de5ce567a15088f9b123dd8337f9fdb208e92fb5c13aad5d"} Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.322577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkqp" event={"ID":"7ef31d38-da28-4060-b917-2b2488e14067","Type":"ContainerDied","Data":"b355b66edffc87f4b8caf11b436ad330a4c0326c0510effb8667cae92b7eb06d"} Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.322960 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ef31d38-da28-4060-b917-2b2488e14067" containerID="b355b66edffc87f4b8caf11b436ad330a4c0326c0510effb8667cae92b7eb06d" exitCode=0 Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.325166 4768 generic.go:334] "Generic (PLEG): container finished" podID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerID="80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9" exitCode=0 Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.325390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z9qb" event={"ID":"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7","Type":"ContainerDied","Data":"80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9"} Nov 24 17:52:28 crc kubenswrapper[4768]: I1124 17:52:28.343361 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hpd8h" podStartSLOduration=167.343339171 podStartE2EDuration="2m47.343339171s" podCreationTimestamp="2025-11-24 17:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:52:28.337552682 +0000 UTC m=+187.198134479" watchObservedRunningTime="2025-11-24 17:52:28.343339171 +0000 UTC m=+187.203920968" Nov 24 17:52:29 crc kubenswrapper[4768]: I1124 17:52:29.253977 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 17:52:29 crc kubenswrapper[4768]: I1124 17:52:29.333446 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkqp" 
event={"ID":"7ef31d38-da28-4060-b917-2b2488e14067","Type":"ContainerStarted","Data":"c4e7b35da1dea43059fa4db6d4567f066e9b4405a4488b168be0502c5d564db9"} Nov 24 17:52:29 crc kubenswrapper[4768]: I1124 17:52:29.336647 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z9qb" event={"ID":"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7","Type":"ContainerStarted","Data":"07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947"} Nov 24 17:52:29 crc kubenswrapper[4768]: I1124 17:52:29.355045 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cnkqp" podStartSLOduration=2.588091334 podStartE2EDuration="37.355022136s" podCreationTimestamp="2025-11-24 17:51:52 +0000 UTC" firstStartedPulling="2025-11-24 17:51:53.92123244 +0000 UTC m=+152.781814217" lastFinishedPulling="2025-11-24 17:52:28.688163232 +0000 UTC m=+187.548745019" observedRunningTime="2025-11-24 17:52:29.35204628 +0000 UTC m=+188.212628047" watchObservedRunningTime="2025-11-24 17:52:29.355022136 +0000 UTC m=+188.215603913" Nov 24 17:52:29 crc kubenswrapper[4768]: I1124 17:52:29.382095 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5z9qb" podStartSLOduration=3.499842593 podStartE2EDuration="37.382074384s" podCreationTimestamp="2025-11-24 17:51:52 +0000 UTC" firstStartedPulling="2025-11-24 17:51:54.902110511 +0000 UTC m=+153.762692288" lastFinishedPulling="2025-11-24 17:52:28.784342302 +0000 UTC m=+187.644924079" observedRunningTime="2025-11-24 17:52:29.380457272 +0000 UTC m=+188.241039049" watchObservedRunningTime="2025-11-24 17:52:29.382074384 +0000 UTC m=+188.242656171" Nov 24 17:52:32 crc kubenswrapper[4768]: I1124 17:52:32.551636 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:52:32 crc kubenswrapper[4768]: I1124 17:52:32.552203 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:52:32 crc kubenswrapper[4768]: I1124 17:52:32.942817 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:52:32 crc kubenswrapper[4768]: I1124 17:52:32.943447 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:52:33 crc kubenswrapper[4768]: I1124 17:52:33.694977 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnkqp" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="registry-server" probeResult="failure" output=< Nov 24 17:52:33 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 17:52:33 crc kubenswrapper[4768]: > Nov 24 17:52:33 crc kubenswrapper[4768]: I1124 17:52:33.988055 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5z9qb" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="registry-server" probeResult="failure" output=< Nov 24 17:52:33 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 17:52:33 crc kubenswrapper[4768]: > Nov 24 17:52:37 crc kubenswrapper[4768]: I1124 17:52:37.372863 4768 generic.go:334] "Generic (PLEG): container finished" podID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerID="68069a32fdde0b85202576dc0fb9895d8d97fdfaf9a5ce8ebc64a29ce3f5ff64" 
exitCode=0 Nov 24 17:52:37 crc kubenswrapper[4768]: I1124 17:52:37.372951 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzqw9" event={"ID":"17fb1883-b4da-4e64-b27a-fdf11ff21ac2","Type":"ContainerDied","Data":"68069a32fdde0b85202576dc0fb9895d8d97fdfaf9a5ce8ebc64a29ce3f5ff64"} Nov 24 17:52:37 crc kubenswrapper[4768]: I1124 17:52:37.377662 4768 generic.go:334] "Generic (PLEG): container finished" podID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerID="973775bc7e4cebe4ea01a062fea0d8478c3dae2a785f9064e2ae658a8eeeb38f" exitCode=0 Nov 24 17:52:37 crc kubenswrapper[4768]: I1124 17:52:37.377701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k6bn" event={"ID":"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9","Type":"ContainerDied","Data":"973775bc7e4cebe4ea01a062fea0d8478c3dae2a785f9064e2ae658a8eeeb38f"} Nov 24 17:52:38 crc kubenswrapper[4768]: I1124 17:52:38.387833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzqw9" event={"ID":"17fb1883-b4da-4e64-b27a-fdf11ff21ac2","Type":"ContainerStarted","Data":"5165c75895d94d66551a3a764c7f2d987ff8c54d2dc4a35effcdf96ead4412bd"} Nov 24 17:52:38 crc kubenswrapper[4768]: I1124 17:52:38.391169 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k6bn" event={"ID":"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9","Type":"ContainerStarted","Data":"3c2667a0b83cf48fb96398dce792c08c76eac0d61b1cdc2b318a7fa388b1391f"} Nov 24 17:52:38 crc kubenswrapper[4768]: I1124 17:52:38.427376 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzqw9" podStartSLOduration=3.255825995 podStartE2EDuration="50.427356662s" podCreationTimestamp="2025-11-24 17:51:48 +0000 UTC" firstStartedPulling="2025-11-24 17:51:50.717863195 +0000 UTC m=+149.578444972" lastFinishedPulling="2025-11-24 17:52:37.889393862 +0000 UTC m=+196.749975639" observedRunningTime="2025-11-24 17:52:38.409809268 +0000 UTC m=+197.270391065" watchObservedRunningTime="2025-11-24 17:52:38.427356662 +0000 UTC m=+197.287938459" Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.127913 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.128179 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.404255 4768 generic.go:334] "Generic (PLEG): container finished" podID="a22825e1-d87e-48cf-b169-7d1360923af4" containerID="ea8f31b8bba31a3efb38f395734c7ace91156b704587bbef6b0df43ae7af57e2" exitCode=0 Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.404338 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8424" event={"ID":"a22825e1-d87e-48cf-b169-7d1360923af4","Type":"ContainerDied","Data":"ea8f31b8bba31a3efb38f395734c7ace91156b704587bbef6b0df43ae7af57e2"} Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.432309 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2k6bn" podStartSLOduration=3.356872225 podStartE2EDuration="50.432291403s" podCreationTimestamp="2025-11-24 17:51:49 +0000 UTC" firstStartedPulling="2025-11-24 17:51:50.72191837 +0000 UTC m=+149.582500147" 
lastFinishedPulling="2025-11-24 17:52:37.797337548 +0000 UTC m=+196.657919325" observedRunningTime="2025-11-24 17:52:38.427472356 +0000 UTC m=+197.288054133" watchObservedRunningTime="2025-11-24 17:52:39.432291403 +0000 UTC m=+198.292873180" Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.523803 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.523859 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:52:39 crc kubenswrapper[4768]: I1124 17:52:39.585813 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:52:40 crc kubenswrapper[4768]: I1124 17:52:40.165099 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vzqw9" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="registry-server" probeResult="failure" output=< Nov 24 17:52:40 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 17:52:40 crc kubenswrapper[4768]: > Nov 24 17:52:41 crc kubenswrapper[4768]: I1124 17:52:41.314053 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-745nn"] Nov 24 17:52:42 crc kubenswrapper[4768]: I1124 17:52:42.592916 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:52:42 crc kubenswrapper[4768]: I1124 17:52:42.629294 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:52:42 crc kubenswrapper[4768]: I1124 17:52:42.983444 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:52:43 crc kubenswrapper[4768]: I1124 17:52:43.018334 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:52:43 crc kubenswrapper[4768]: I1124 17:52:43.656412 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:52:43 crc kubenswrapper[4768]: I1124 17:52:43.656468 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:52:43 crc kubenswrapper[4768]: I1124 17:52:43.656537 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:52:43 crc kubenswrapper[4768]: I1124 17:52:43.657121 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Nov 24 17:52:43 crc kubenswrapper[4768]: I1124 17:52:43.657220 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50" gracePeriod=600 Nov 24 17:52:44 crc kubenswrapper[4768]: I1124 17:52:44.426971 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50" exitCode=0 Nov 24 17:52:44 crc kubenswrapper[4768]: I1124 17:52:44.427047 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50"} Nov 24 17:52:44 crc kubenswrapper[4768]: I1124 17:52:44.429373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8424" event={"ID":"a22825e1-d87e-48cf-b169-7d1360923af4","Type":"ContainerStarted","Data":"b5f2cdb4a902253311831ffa20ba8ed2c1f5cc1b4cba151d29e6c7acfee8b219"} Nov 24 17:52:44 crc kubenswrapper[4768]: I1124 17:52:44.450961 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x8424" podStartSLOduration=4.393668315 podStartE2EDuration="56.450941565s" podCreationTimestamp="2025-11-24 17:51:48 +0000 UTC" firstStartedPulling="2025-11-24 17:51:50.765629237 +0000 UTC m=+149.626211014" lastFinishedPulling="2025-11-24 17:52:42.822902487 +0000 UTC m=+201.683484264" observedRunningTime="2025-11-24 17:52:44.449693718 +0000 UTC m=+203.310275505" watchObservedRunningTime="2025-11-24 17:52:44.450941565 +0000 UTC m=+203.311523352" Nov 24 17:52:45 crc kubenswrapper[4768]: I1124 17:52:45.731223 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5z9qb"] Nov 24 17:52:45 crc kubenswrapper[4768]: I1124 17:52:45.731771 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5z9qb" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="registry-server" containerID="cri-o://07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947" gracePeriod=2 Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.090586 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.121554 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2zpp\" (UniqueName: \"kubernetes.io/projected/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-kube-api-access-r2zpp\") pod \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.121616 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-catalog-content\") pod \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.121641 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-utilities\") pod \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\" (UID: \"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7\") " Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.122743 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-utilities" (OuterVolumeSpecName: "utilities") pod "3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" (UID: "3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.128336 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-kube-api-access-r2zpp" (OuterVolumeSpecName: "kube-api-access-r2zpp") pod "3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" (UID: "3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7"). InnerVolumeSpecName "kube-api-access-r2zpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.223475 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2zpp\" (UniqueName: \"kubernetes.io/projected/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-kube-api-access-r2zpp\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.223528 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.233903 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" (UID: "3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.324374 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.446178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"886b914367f5a2fa9c56278a5ec2fd4868e1d9e80fd680b439865ae06b105406"} Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.448222 4768 generic.go:334] "Generic (PLEG): container finished" podID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerID="59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748" exitCode=0 Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.448300 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zwr6" event={"ID":"81701703-9dca-4d65-a4b5-47c74ead9c5f","Type":"ContainerDied","Data":"59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748"} Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.450426 4768 generic.go:334] "Generic (PLEG): container finished" podID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerID="830404e2718344d36d4ccb6d59351c8226cbb776b3c2c41c96ed7e1d6fd352e7" exitCode=0 Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.450519 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjmmt" event={"ID":"21aa9a49-fa80-4c66-97bb-bcd28c31aaef","Type":"ContainerDied","Data":"830404e2718344d36d4ccb6d59351c8226cbb776b3c2c41c96ed7e1d6fd352e7"} Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.453106 4768 generic.go:334] "Generic (PLEG): container finished" podID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerID="07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947" exitCode=0 Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.453148 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5z9qb" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.453164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z9qb" event={"ID":"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7","Type":"ContainerDied","Data":"07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947"} Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.453187 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5z9qb" event={"ID":"3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7","Type":"ContainerDied","Data":"8e26730ef236dad64cb2f58717c2389ef109daa794e261f5a42059e0c6bd3872"} Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.453207 4768 scope.go:117] "RemoveContainer" containerID="07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.457407 4768 generic.go:334] "Generic (PLEG): container finished" podID="5fa06755-0386-4960-9adc-258106178fca" containerID="d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803" exitCode=0 Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.457444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gp2qx" event={"ID":"5fa06755-0386-4960-9adc-258106178fca","Type":"ContainerDied","Data":"d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803"} Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.474532 4768 scope.go:117] "RemoveContainer" containerID="80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.490715 4768 scope.go:117] "RemoveContainer" containerID="5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.519228 4768 scope.go:117] "RemoveContainer" containerID="07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947" Nov 24 17:52:46 crc kubenswrapper[4768]: E1124 17:52:46.519677 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947\": container with ID starting with 07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947 not found: ID does not exist" containerID="07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.519718 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947"} err="failed to get container status \"07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947\": rpc error: code = NotFound desc = could not find container \"07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947\": container with ID starting with 07eb14233cde1efd4d7b48b96411c69c4c0efdb51b87bd7e86f4a3c6a95ac947 not found: ID does not exist" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.519744 4768 scope.go:117] "RemoveContainer" containerID="80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9" Nov 24 17:52:46 crc kubenswrapper[4768]: E1124 17:52:46.520105 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9\": container with ID starting with 
80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9 not found: ID does not exist" containerID="80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.520151 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9"} err="failed to get container status \"80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9\": rpc error: code = NotFound desc = could not find container \"80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9\": container with ID starting with 80027f918dc17129e8bc15e80a7b14f9733e73a79f66eaee003a72677f3ce1e9 not found: ID does not exist" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.520181 4768 scope.go:117] "RemoveContainer" containerID="5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc" Nov 24 17:52:46 crc kubenswrapper[4768]: E1124 17:52:46.521709 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc\": container with ID starting with 5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc not found: ID does not exist" containerID="5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.521757 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc"} err="failed to get container status \"5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc\": rpc error: code = NotFound desc = could not find container \"5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc\": container with ID starting with 5273bd5b06ca0fcb9d894a79c052eb77b3d16cdf14641767419982525eaa34bc not found: ID does not exist" Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.534215 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5z9qb"] Nov 24 17:52:46 crc kubenswrapper[4768]: I1124 17:52:46.537894 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5z9qb"] Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.468875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gp2qx" event={"ID":"5fa06755-0386-4960-9adc-258106178fca","Type":"ContainerStarted","Data":"b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e"} Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.473219 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zwr6" event={"ID":"81701703-9dca-4d65-a4b5-47c74ead9c5f","Type":"ContainerStarted","Data":"9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4"} Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.476449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjmmt" event={"ID":"21aa9a49-fa80-4c66-97bb-bcd28c31aaef","Type":"ContainerStarted","Data":"a39192990610a7d55212ea4fa15d5aa8ca69dbe380f351620a21cd70f067c952"} Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.487800 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gp2qx" podStartSLOduration=2.387355134 
podStartE2EDuration="58.48778389s" podCreationTimestamp="2025-11-24 17:51:49 +0000 UTC" firstStartedPulling="2025-11-24 17:51:50.73241336 +0000 UTC m=+149.592995137" lastFinishedPulling="2025-11-24 17:52:46.832842116 +0000 UTC m=+205.693423893" observedRunningTime="2025-11-24 17:52:47.487403375 +0000 UTC m=+206.347985162" watchObservedRunningTime="2025-11-24 17:52:47.48778389 +0000 UTC m=+206.348365667" Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.522627 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7zwr6" podStartSLOduration=2.287233063 podStartE2EDuration="56.522608089s" podCreationTimestamp="2025-11-24 17:51:51 +0000 UTC" firstStartedPulling="2025-11-24 17:51:52.838670798 +0000 UTC m=+151.699252575" lastFinishedPulling="2025-11-24 17:52:47.074045824 +0000 UTC m=+205.934627601" observedRunningTime="2025-11-24 17:52:47.505855344 +0000 UTC m=+206.366437121" watchObservedRunningTime="2025-11-24 17:52:47.522608089 +0000 UTC m=+206.383189866" Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.524657 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tjmmt" podStartSLOduration=3.474548523 podStartE2EDuration="57.524648505s" podCreationTimestamp="2025-11-24 17:51:50 +0000 UTC" firstStartedPulling="2025-11-24 17:51:52.83062481 +0000 UTC m=+151.691206587" lastFinishedPulling="2025-11-24 17:52:46.880724792 +0000 UTC m=+205.741306569" observedRunningTime="2025-11-24 17:52:47.524571522 +0000 UTC m=+206.385153309" watchObservedRunningTime="2025-11-24 17:52:47.524648505 +0000 UTC m=+206.385230282" Nov 24 17:52:47 crc kubenswrapper[4768]: I1124 17:52:47.905815 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" path="/var/lib/kubelet/pods/3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7/volumes" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.201302 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.268057 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.386877 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.386959 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.426823 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.531836 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.572872 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.748637 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.748689 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:52:49 crc kubenswrapper[4768]: I1124 17:52:49.793350 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:52:51 crc kubenswrapper[4768]: I1124 17:52:51.136774 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:52:51 crc kubenswrapper[4768]: I1124 17:52:51.137174 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:52:51 crc kubenswrapper[4768]: I1124 17:52:51.208604 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:52:51 crc kubenswrapper[4768]: I1124 17:52:51.560216 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:52:51 crc kubenswrapper[4768]: I1124 17:52:51.560627 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:52:51 crc kubenswrapper[4768]: I1124 17:52:51.600519 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:52:52 crc kubenswrapper[4768]: I1124 17:52:52.546410 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:52:52 crc kubenswrapper[4768]: I1124 17:52:52.731169 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2k6bn"] Nov 24 17:52:52 crc kubenswrapper[4768]: I1124 17:52:52.731400 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2k6bn" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="registry-server" containerID="cri-o://3c2667a0b83cf48fb96398dce792c08c76eac0d61b1cdc2b318a7fa388b1391f" gracePeriod=2 Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.514937 4768 generic.go:334] "Generic (PLEG): container finished" podID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerID="3c2667a0b83cf48fb96398dce792c08c76eac0d61b1cdc2b318a7fa388b1391f" exitCode=0 Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.514985 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k6bn" event={"ID":"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9","Type":"ContainerDied","Data":"3c2667a0b83cf48fb96398dce792c08c76eac0d61b1cdc2b318a7fa388b1391f"} Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.753603 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.827524 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2p4r\" (UniqueName: \"kubernetes.io/projected/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-kube-api-access-k2p4r\") pod \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.827979 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-utilities\") pod \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.828097 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-catalog-content\") pod \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\" (UID: \"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9\") " Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.828823 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-utilities" (OuterVolumeSpecName: "utilities") pod "e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" (UID: "e7d38bf6-7bd5-468e-ac9d-508e8aea36b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.835419 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-kube-api-access-k2p4r" (OuterVolumeSpecName: "kube-api-access-k2p4r") pod "e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" (UID: "e7d38bf6-7bd5-468e-ac9d-508e8aea36b9"). InnerVolumeSpecName "kube-api-access-k2p4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.877270 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" (UID: "e7d38bf6-7bd5-468e-ac9d-508e8aea36b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.929108 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.929145 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:53 crc kubenswrapper[4768]: I1124 17:52:53.929159 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2p4r\" (UniqueName: \"kubernetes.io/projected/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9-kube-api-access-k2p4r\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.524409 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2k6bn" event={"ID":"e7d38bf6-7bd5-468e-ac9d-508e8aea36b9","Type":"ContainerDied","Data":"d44cc620b6433feca96ebce38ec5f1cbef1a46f1a654b0422d90a4032e64ae1a"} Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.524749 4768 scope.go:117] "RemoveContainer" containerID="3c2667a0b83cf48fb96398dce792c08c76eac0d61b1cdc2b318a7fa388b1391f" Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.524466 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2k6bn" Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.549544 4768 scope.go:117] "RemoveContainer" containerID="973775bc7e4cebe4ea01a062fea0d8478c3dae2a785f9064e2ae658a8eeeb38f" Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.561676 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2k6bn"] Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.567448 4768 scope.go:117] "RemoveContainer" containerID="7a6952937b93482b7af0a9d0277f879948014ea7096503e22ecbca06245ce51b" Nov 24 17:52:54 crc kubenswrapper[4768]: I1124 17:52:54.571532 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2k6bn"] Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.133443 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zwr6"] Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.530476 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7zwr6" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="registry-server" containerID="cri-o://9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4" gracePeriod=2 Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.907364 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" path="/var/lib/kubelet/pods/e7d38bf6-7bd5-468e-ac9d-508e8aea36b9/volumes" Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.914245 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.953323 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnh98\" (UniqueName: \"kubernetes.io/projected/81701703-9dca-4d65-a4b5-47c74ead9c5f-kube-api-access-lnh98\") pod \"81701703-9dca-4d65-a4b5-47c74ead9c5f\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.953400 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-catalog-content\") pod \"81701703-9dca-4d65-a4b5-47c74ead9c5f\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.956679 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-utilities\") pod \"81701703-9dca-4d65-a4b5-47c74ead9c5f\" (UID: \"81701703-9dca-4d65-a4b5-47c74ead9c5f\") " Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.957472 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-utilities" (OuterVolumeSpecName: "utilities") pod "81701703-9dca-4d65-a4b5-47c74ead9c5f" (UID: "81701703-9dca-4d65-a4b5-47c74ead9c5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.973615 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81701703-9dca-4d65-a4b5-47c74ead9c5f" (UID: "81701703-9dca-4d65-a4b5-47c74ead9c5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:52:55 crc kubenswrapper[4768]: I1124 17:52:55.973836 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81701703-9dca-4d65-a4b5-47c74ead9c5f-kube-api-access-lnh98" (OuterVolumeSpecName: "kube-api-access-lnh98") pod "81701703-9dca-4d65-a4b5-47c74ead9c5f" (UID: "81701703-9dca-4d65-a4b5-47c74ead9c5f"). InnerVolumeSpecName "kube-api-access-lnh98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.057760 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.057796 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnh98\" (UniqueName: \"kubernetes.io/projected/81701703-9dca-4d65-a4b5-47c74ead9c5f-kube-api-access-lnh98\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.057807 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81701703-9dca-4d65-a4b5-47c74ead9c5f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.538673 4768 generic.go:334] "Generic (PLEG): container finished" podID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerID="9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4" exitCode=0 Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.538718 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zwr6" event={"ID":"81701703-9dca-4d65-a4b5-47c74ead9c5f","Type":"ContainerDied","Data":"9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4"} Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.538749 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zwr6" event={"ID":"81701703-9dca-4d65-a4b5-47c74ead9c5f","Type":"ContainerDied","Data":"3bdb81de892a4c279ca85cf52deeb7f7ed37dffc10bd310d4b4535975c785ada"} Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.538761 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zwr6" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.538769 4768 scope.go:117] "RemoveContainer" containerID="9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.557041 4768 scope.go:117] "RemoveContainer" containerID="59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.566167 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zwr6"] Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.569252 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zwr6"] Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.574232 4768 scope.go:117] "RemoveContainer" containerID="7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.588364 4768 scope.go:117] "RemoveContainer" containerID="9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4" Nov 24 17:52:56 crc kubenswrapper[4768]: E1124 17:52:56.588929 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4\": container with ID starting with 9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4 not found: ID does not exist" containerID="9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.588987 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4"} err="failed to get container status \"9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4\": rpc error: code = NotFound desc = could not find container \"9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4\": container with ID starting with 9e918ffa1b9c0e17439b4a1137ba4407a1475d9669b2666812b40a204e64b1c4 not found: ID does not exist" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.589020 4768 scope.go:117] "RemoveContainer" containerID="59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748" Nov 24 17:52:56 crc kubenswrapper[4768]: E1124 17:52:56.589340 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748\": container with ID starting with 59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748 not found: ID does not exist" containerID="59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.589372 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748"} err="failed to get container status \"59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748\": rpc error: code = NotFound desc = could not find container \"59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748\": container with ID starting with 59962bd71a9ececae41221e7d37a629bbd67c7de6f401c30483a5cd732c0e748 not found: ID does not exist" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.589391 4768 scope.go:117] "RemoveContainer" 
containerID="7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21" Nov 24 17:52:56 crc kubenswrapper[4768]: E1124 17:52:56.590217 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21\": container with ID starting with 7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21 not found: ID does not exist" containerID="7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21" Nov 24 17:52:56 crc kubenswrapper[4768]: I1124 17:52:56.590249 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21"} err="failed to get container status \"7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21\": rpc error: code = NotFound desc = could not find container \"7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21\": container with ID starting with 7a407368b4a9ea7c41a720463e3137d473a2e8da50b32888f508982aefe23a21 not found: ID does not exist" Nov 24 17:52:57 crc kubenswrapper[4768]: I1124 17:52:57.907450 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" path="/var/lib/kubelet/pods/81701703-9dca-4d65-a4b5-47c74ead9c5f/volumes" Nov 24 17:52:59 crc kubenswrapper[4768]: I1124 17:52:59.819983 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:52:59 crc kubenswrapper[4768]: I1124 17:52:59.867204 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gp2qx"] Nov 24 17:53:00 crc kubenswrapper[4768]: I1124 17:53:00.561196 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gp2qx" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="registry-server" containerID="cri-o://b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e" gracePeriod=2 Nov 24 17:53:00 crc kubenswrapper[4768]: I1124 17:53:00.988016 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.020775 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-catalog-content\") pod \"5fa06755-0386-4960-9adc-258106178fca\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.020879 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlf7n\" (UniqueName: \"kubernetes.io/projected/5fa06755-0386-4960-9adc-258106178fca-kube-api-access-vlf7n\") pod \"5fa06755-0386-4960-9adc-258106178fca\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.020922 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-utilities\") pod \"5fa06755-0386-4960-9adc-258106178fca\" (UID: \"5fa06755-0386-4960-9adc-258106178fca\") " Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.022213 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-utilities" (OuterVolumeSpecName: "utilities") pod "5fa06755-0386-4960-9adc-258106178fca" (UID: "5fa06755-0386-4960-9adc-258106178fca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.026532 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa06755-0386-4960-9adc-258106178fca-kube-api-access-vlf7n" (OuterVolumeSpecName: "kube-api-access-vlf7n") pod "5fa06755-0386-4960-9adc-258106178fca" (UID: "5fa06755-0386-4960-9adc-258106178fca"). InnerVolumeSpecName "kube-api-access-vlf7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.075918 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fa06755-0386-4960-9adc-258106178fca" (UID: "5fa06755-0386-4960-9adc-258106178fca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.122929 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.122982 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlf7n\" (UniqueName: \"kubernetes.io/projected/5fa06755-0386-4960-9adc-258106178fca-kube-api-access-vlf7n\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.123001 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa06755-0386-4960-9adc-258106178fca-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.181092 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.567577 4768 generic.go:334] "Generic (PLEG): container finished" podID="5fa06755-0386-4960-9adc-258106178fca" containerID="b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e" exitCode=0 Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.567622 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gp2qx" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.567625 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gp2qx" event={"ID":"5fa06755-0386-4960-9adc-258106178fca","Type":"ContainerDied","Data":"b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e"} Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.567654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gp2qx" event={"ID":"5fa06755-0386-4960-9adc-258106178fca","Type":"ContainerDied","Data":"3337af008d629a63c3851d5733e8e5f5e408c656f09fe5a7246e39a4a2a167cc"} Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.567686 4768 scope.go:117] "RemoveContainer" containerID="b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.582401 4768 scope.go:117] "RemoveContainer" containerID="d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.600102 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gp2qx"] Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.605283 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gp2qx"] Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.616029 4768 scope.go:117] "RemoveContainer" containerID="f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.628693 4768 scope.go:117] "RemoveContainer" containerID="b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e" Nov 24 17:53:01 crc kubenswrapper[4768]: E1124 17:53:01.629202 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e\": container with ID starting with 
b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e not found: ID does not exist" containerID="b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.629248 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e"} err="failed to get container status \"b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e\": rpc error: code = NotFound desc = could not find container \"b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e\": container with ID starting with b028c6e46ab87ae54e3c52431c8c758fb58888ee9d50a67581d4bdb7d961ee5e not found: ID does not exist" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.629277 4768 scope.go:117] "RemoveContainer" containerID="d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803" Nov 24 17:53:01 crc kubenswrapper[4768]: E1124 17:53:01.629600 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803\": container with ID starting with d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803 not found: ID does not exist" containerID="d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.629626 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803"} err="failed to get container status \"d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803\": rpc error: code = NotFound desc = could not find container \"d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803\": container with ID starting with d7dc4fa5bbb79ac6f3ffb0f20895ca005e631cd9687b8e171530301ae0af7803 not found: ID does not exist" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.629651 4768 scope.go:117] "RemoveContainer" containerID="f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90" Nov 24 17:53:01 crc kubenswrapper[4768]: E1124 17:53:01.629967 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90\": container with ID starting with f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90 not found: ID does not exist" containerID="f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.630026 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90"} err="failed to get container status \"f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90\": rpc error: code = NotFound desc = could not find container \"f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90\": container with ID starting with f60fa97ff1ecc3e1179455a07cd04ce395bfa652496378fa521e64225f4a9c90 not found: ID does not exist" Nov 24 17:53:01 crc kubenswrapper[4768]: I1124 17:53:01.907327 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa06755-0386-4960-9adc-258106178fca" path="/var/lib/kubelet/pods/5fa06755-0386-4960-9adc-258106178fca/volumes" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.354813 
4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" podUID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" containerName="oauth-openshift" containerID="cri-o://665d22d56488cb3101832f0b65ab3a83f58ddd283b03b8738c5430b85c610cb2" gracePeriod=15 Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.601397 4768 generic.go:334] "Generic (PLEG): container finished" podID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" containerID="665d22d56488cb3101832f0b65ab3a83f58ddd283b03b8738c5430b85c610cb2" exitCode=0 Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.601516 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" event={"ID":"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b","Type":"ContainerDied","Data":"665d22d56488cb3101832f0b65ab3a83f58ddd283b03b8738c5430b85c610cb2"} Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.686450 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799422 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-serving-cert\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799565 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799606 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-login\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799647 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-ocp-branding-template\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799707 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-error\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799750 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-session\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799788 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-cliconfig\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799826 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-provider-selection\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799866 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-service-ca\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-policies\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.799947 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-idp-0-file-data\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.800034 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-trusted-ca-bundle\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.800068 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-dir\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.800108 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhnfb\" (UniqueName: \"kubernetes.io/projected/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-kube-api-access-zhnfb\") pod \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\" (UID: \"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b\") " Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.801446 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.801472 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.801700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.802074 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.802484 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.807276 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-kube-api-access-zhnfb" (OuterVolumeSpecName: "kube-api-access-zhnfb") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "kube-api-access-zhnfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.808414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.808984 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.809135 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.812134 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.815155 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.815299 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.815776 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.816122 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" (UID: "ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902241 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902290 4768 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902316 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhnfb\" (UniqueName: \"kubernetes.io/projected/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-kube-api-access-zhnfb\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902335 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902352 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902370 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902392 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902410 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902464 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902483 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902526 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902547 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902564 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:06 crc kubenswrapper[4768]: I1124 17:53:06.902581 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:07 crc kubenswrapper[4768]: I1124 17:53:07.608831 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" event={"ID":"ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b","Type":"ContainerDied","Data":"a49a761cef391e672338ab62f6ddf2a71a96ec9f2567c0537c0111edeb83fce1"} Nov 24 17:53:07 crc kubenswrapper[4768]: I1124 17:53:07.608891 4768 scope.go:117] "RemoveContainer" containerID="665d22d56488cb3101832f0b65ab3a83f58ddd283b03b8738c5430b85c610cb2" Nov 24 17:53:07 crc kubenswrapper[4768]: I1124 17:53:07.608949 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-745nn" Nov 24 17:53:07 crc kubenswrapper[4768]: I1124 17:53:07.643391 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-745nn"] Nov 24 17:53:07 crc kubenswrapper[4768]: I1124 17:53:07.650077 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-745nn"] Nov 24 17:53:07 crc kubenswrapper[4768]: I1124 17:53:07.910993 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" path="/var/lib/kubelet/pods/ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b/volumes" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.284480 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-cc7989dc6-k9bsq"] Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285160 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285173 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285182 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285187 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285197 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285202 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285211 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285216 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285226 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285233 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285242 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285250 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285263 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285271 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285281 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285288 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285296 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" containerName="oauth-openshift" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285301 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" containerName="oauth-openshift" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285309 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285314 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285322 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="105926c2-41b3-4a78-a8a4-cdf09ac261dd" containerName="pruner" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285327 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="105926c2-41b3-4a78-a8a4-cdf09ac261dd" containerName="pruner" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285333 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285338 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="extract-utilities" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285346 4768 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285352 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="extract-content" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285360 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285365 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: E1124 17:53:16.285371 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c38c11-3667-4a14-82d1-c8dbabc968d2" containerName="pruner" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285377 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c38c11-3667-4a14-82d1-c8dbabc968d2" containerName="pruner" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285471 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="105926c2-41b3-4a78-a8a4-cdf09ac261dd" containerName="pruner" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285513 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c38c11-3667-4a14-82d1-c8dbabc968d2" containerName="pruner" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285523 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee20a778-e5b1-4d23-ab68-c2b5dfa0a11b" containerName="oauth-openshift" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285534 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d38bf6-7bd5-468e-ac9d-508e8aea36b9" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285542 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa06755-0386-4960-9adc-258106178fca" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285550 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="81701703-9dca-4d65-a4b5-47c74ead9c5f" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285558 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f663eea-de7c-4f55-92e4-2ffbc5c6b5a7" containerName="registry-server" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.285897 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.289203 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.289212 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.289546 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.290929 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.290936 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.291816 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.291817 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.291971 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.292179 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.292188 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.292240 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.292563 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.298896 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.311210 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.313041 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.317159 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cc7989dc6-k9bsq"] Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323119 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-session\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " 
pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323198 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe6eed81-a494-4949-84ba-6236f3fc66cc-audit-dir\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323219 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323241 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323262 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323310 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-login\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323326 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-audit-policies\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323346 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323371 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-error\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323400 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmj2p\" (UniqueName: \"kubernetes.io/projected/fe6eed81-a494-4949-84ba-6236f3fc66cc-kube-api-access-xmj2p\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323421 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.323440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.424881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.424943 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.424973 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-login\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.424996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-audit-policies\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425058 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-error\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425102 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmj2p\" (UniqueName: \"kubernetes.io/projected/fe6eed81-a494-4949-84ba-6236f3fc66cc-kube-api-access-xmj2p\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425193 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-session\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.425954 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426144 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426210 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-audit-policies\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe6eed81-a494-4949-84ba-6236f3fc66cc-audit-dir\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426261 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426308 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe6eed81-a494-4949-84ba-6236f3fc66cc-audit-dir\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426340 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.426427 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.431965 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.432588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-error\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.432602 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.433420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.435333 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.435346 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-user-template-login\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.435394 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.435861 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe6eed81-a494-4949-84ba-6236f3fc66cc-v4-0-config-system-session\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: 
\"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.443176 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmj2p\" (UniqueName: \"kubernetes.io/projected/fe6eed81-a494-4949-84ba-6236f3fc66cc-kube-api-access-xmj2p\") pod \"oauth-openshift-cc7989dc6-k9bsq\" (UID: \"fe6eed81-a494-4949-84ba-6236f3fc66cc\") " pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.612888 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:16 crc kubenswrapper[4768]: I1124 17:53:16.806550 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cc7989dc6-k9bsq"] Nov 24 17:53:17 crc kubenswrapper[4768]: I1124 17:53:17.663680 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" event={"ID":"fe6eed81-a494-4949-84ba-6236f3fc66cc","Type":"ContainerStarted","Data":"b498c20ff8029a15a53aaec07a39a785bcf18360359cfdb22dd50987d660898d"} Nov 24 17:53:17 crc kubenswrapper[4768]: I1124 17:53:17.664230 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" event={"ID":"fe6eed81-a494-4949-84ba-6236f3fc66cc","Type":"ContainerStarted","Data":"0769f2b73f034037f4a3c086dbd0304d552faa06f5d70c4e0e4d5edf9f06d2c4"} Nov 24 17:53:17 crc kubenswrapper[4768]: I1124 17:53:17.664247 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:17 crc kubenswrapper[4768]: I1124 17:53:17.669624 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" Nov 24 17:53:17 crc kubenswrapper[4768]: I1124 17:53:17.691093 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-cc7989dc6-k9bsq" podStartSLOduration=36.691064583 podStartE2EDuration="36.691064583s" podCreationTimestamp="2025-11-24 17:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:53:17.691020272 +0000 UTC m=+236.551602079" watchObservedRunningTime="2025-11-24 17:53:17.691064583 +0000 UTC m=+236.551646380" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.631752 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzqw9"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.632832 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzqw9" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="registry-server" containerID="cri-o://5165c75895d94d66551a3a764c7f2d987ff8c54d2dc4a35effcdf96ead4412bd" gracePeriod=30 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.658889 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8424"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.659117 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x8424" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="registry-server" 
containerID="cri-o://b5f2cdb4a902253311831ffa20ba8ed2c1f5cc1b4cba151d29e6c7acfee8b219" gracePeriod=30 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.668806 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zm9x"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.669051 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" containerID="cri-o://cfb967f735aa5e15d268e8ab56d2e2adb067db43ca98f86627c8c57c958d0835" gracePeriod=30 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.678603 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjmmt"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.678926 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tjmmt" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="registry-server" containerID="cri-o://a39192990610a7d55212ea4fa15d5aa8ca69dbe380f351620a21cd70f067c952" gracePeriod=30 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.688424 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnkqp"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.688736 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cnkqp" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="registry-server" containerID="cri-o://c4e7b35da1dea43059fa4db6d4567f066e9b4405a4488b168be0502c5d564db9" gracePeriod=30 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.690124 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vtrzd"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.694793 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.707356 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vtrzd"] Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.832690 4768 generic.go:334] "Generic (PLEG): container finished" podID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerID="cfb967f735aa5e15d268e8ab56d2e2adb067db43ca98f86627c8c57c958d0835" exitCode=0 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.832808 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" event={"ID":"9b8d6985-79fe-4be9-a7e3-5c762214d678","Type":"ContainerDied","Data":"cfb967f735aa5e15d268e8ab56d2e2adb067db43ca98f86627c8c57c958d0835"} Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.839793 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ef31d38-da28-4060-b917-2b2488e14067" containerID="c4e7b35da1dea43059fa4db6d4567f066e9b4405a4488b168be0502c5d564db9" exitCode=0 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.839891 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkqp" event={"ID":"7ef31d38-da28-4060-b917-2b2488e14067","Type":"ContainerDied","Data":"c4e7b35da1dea43059fa4db6d4567f066e9b4405a4488b168be0502c5d564db9"} Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.846972 4768 generic.go:334] "Generic (PLEG): container finished" podID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerID="a39192990610a7d55212ea4fa15d5aa8ca69dbe380f351620a21cd70f067c952" exitCode=0 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.847073 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjmmt" event={"ID":"21aa9a49-fa80-4c66-97bb-bcd28c31aaef","Type":"ContainerDied","Data":"a39192990610a7d55212ea4fa15d5aa8ca69dbe380f351620a21cd70f067c952"} Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.853071 4768 generic.go:334] "Generic (PLEG): container finished" podID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerID="5165c75895d94d66551a3a764c7f2d987ff8c54d2dc4a35effcdf96ead4412bd" exitCode=0 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.853154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzqw9" event={"ID":"17fb1883-b4da-4e64-b27a-fdf11ff21ac2","Type":"ContainerDied","Data":"5165c75895d94d66551a3a764c7f2d987ff8c54d2dc4a35effcdf96ead4412bd"} Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.860233 4768 generic.go:334] "Generic (PLEG): container finished" podID="a22825e1-d87e-48cf-b169-7d1360923af4" containerID="b5f2cdb4a902253311831ffa20ba8ed2c1f5cc1b4cba151d29e6c7acfee8b219" exitCode=0 Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.860309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8424" event={"ID":"a22825e1-d87e-48cf-b169-7d1360923af4","Type":"ContainerDied","Data":"b5f2cdb4a902253311831ffa20ba8ed2c1f5cc1b4cba151d29e6c7acfee8b219"} Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.891809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d17e8f38-c1cf-4774-ad10-d2e08512c158-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: 
\"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.891860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lz52\" (UniqueName: \"kubernetes.io/projected/d17e8f38-c1cf-4774-ad10-d2e08512c158-kube-api-access-4lz52\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.891915 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d17e8f38-c1cf-4774-ad10-d2e08512c158-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.993379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d17e8f38-c1cf-4774-ad10-d2e08512c158-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.993856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lz52\" (UniqueName: \"kubernetes.io/projected/d17e8f38-c1cf-4774-ad10-d2e08512c158-kube-api-access-4lz52\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.993908 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d17e8f38-c1cf-4774-ad10-d2e08512c158-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:45 crc kubenswrapper[4768]: I1124 17:53:45.995527 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d17e8f38-c1cf-4774-ad10-d2e08512c158-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.004220 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d17e8f38-c1cf-4774-ad10-d2e08512c158-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.018964 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lz52\" (UniqueName: \"kubernetes.io/projected/d17e8f38-c1cf-4774-ad10-d2e08512c158-kube-api-access-4lz52\") pod \"marketplace-operator-79b997595-vtrzd\" (UID: \"d17e8f38-c1cf-4774-ad10-d2e08512c158\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.118123 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.167468 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.180180 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.183025 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.197791 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199106 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-949ms\" (UniqueName: \"kubernetes.io/projected/7ef31d38-da28-4060-b917-2b2488e14067-kube-api-access-949ms\") pod \"7ef31d38-da28-4060-b917-2b2488e14067\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199164 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnw8b\" (UniqueName: \"kubernetes.io/projected/9b8d6985-79fe-4be9-a7e3-5c762214d678-kube-api-access-xnw8b\") pod \"9b8d6985-79fe-4be9-a7e3-5c762214d678\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199192 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-operator-metrics\") pod \"9b8d6985-79fe-4be9-a7e3-5c762214d678\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199221 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-catalog-content\") pod \"7ef31d38-da28-4060-b917-2b2488e14067\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199263 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5z6d\" (UniqueName: \"kubernetes.io/projected/a22825e1-d87e-48cf-b169-7d1360923af4-kube-api-access-j5z6d\") pod \"a22825e1-d87e-48cf-b169-7d1360923af4\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199320 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8vbg\" (UniqueName: \"kubernetes.io/projected/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-kube-api-access-b8vbg\") pod \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199351 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-utilities\") pod 
\"7ef31d38-da28-4060-b917-2b2488e14067\" (UID: \"7ef31d38-da28-4060-b917-2b2488e14067\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199372 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-trusted-ca\") pod \"9b8d6985-79fe-4be9-a7e3-5c762214d678\" (UID: \"9b8d6985-79fe-4be9-a7e3-5c762214d678\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-utilities\") pod \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199412 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-utilities\") pod \"a22825e1-d87e-48cf-b169-7d1360923af4\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199447 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-catalog-content\") pod \"a22825e1-d87e-48cf-b169-7d1360923af4\" (UID: \"a22825e1-d87e-48cf-b169-7d1360923af4\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-utilities\") pod \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199484 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-catalog-content\") pod \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199515 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgcqw\" (UniqueName: \"kubernetes.io/projected/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-kube-api-access-jgcqw\") pod \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\" (UID: \"17fb1883-b4da-4e64-b27a-fdf11ff21ac2\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.199544 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-catalog-content\") pod \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\" (UID: \"21aa9a49-fa80-4c66-97bb-bcd28c31aaef\") " Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.200420 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-utilities" (OuterVolumeSpecName: "utilities") pod "21aa9a49-fa80-4c66-97bb-bcd28c31aaef" (UID: "21aa9a49-fa80-4c66-97bb-bcd28c31aaef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.200592 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-utilities" (OuterVolumeSpecName: "utilities") pod "17fb1883-b4da-4e64-b27a-fdf11ff21ac2" (UID: "17fb1883-b4da-4e64-b27a-fdf11ff21ac2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.201642 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-utilities" (OuterVolumeSpecName: "utilities") pod "a22825e1-d87e-48cf-b169-7d1360923af4" (UID: "a22825e1-d87e-48cf-b169-7d1360923af4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.201858 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9b8d6985-79fe-4be9-a7e3-5c762214d678" (UID: "9b8d6985-79fe-4be9-a7e3-5c762214d678"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.202222 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-utilities" (OuterVolumeSpecName: "utilities") pod "7ef31d38-da28-4060-b917-2b2488e14067" (UID: "7ef31d38-da28-4060-b917-2b2488e14067"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.202523 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.202548 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.202558 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.202567 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.202577 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.207878 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef31d38-da28-4060-b917-2b2488e14067-kube-api-access-949ms" (OuterVolumeSpecName: "kube-api-access-949ms") pod "7ef31d38-da28-4060-b917-2b2488e14067" (UID: "7ef31d38-da28-4060-b917-2b2488e14067"). InnerVolumeSpecName "kube-api-access-949ms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.211561 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-kube-api-access-b8vbg" (OuterVolumeSpecName: "kube-api-access-b8vbg") pod "21aa9a49-fa80-4c66-97bb-bcd28c31aaef" (UID: "21aa9a49-fa80-4c66-97bb-bcd28c31aaef"). InnerVolumeSpecName "kube-api-access-b8vbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.212923 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8d6985-79fe-4be9-a7e3-5c762214d678-kube-api-access-xnw8b" (OuterVolumeSpecName: "kube-api-access-xnw8b") pod "9b8d6985-79fe-4be9-a7e3-5c762214d678" (UID: "9b8d6985-79fe-4be9-a7e3-5c762214d678"). InnerVolumeSpecName "kube-api-access-xnw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.215002 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22825e1-d87e-48cf-b169-7d1360923af4-kube-api-access-j5z6d" (OuterVolumeSpecName: "kube-api-access-j5z6d") pod "a22825e1-d87e-48cf-b169-7d1360923af4" (UID: "a22825e1-d87e-48cf-b169-7d1360923af4"). InnerVolumeSpecName "kube-api-access-j5z6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.215423 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-kube-api-access-jgcqw" (OuterVolumeSpecName: "kube-api-access-jgcqw") pod "17fb1883-b4da-4e64-b27a-fdf11ff21ac2" (UID: "17fb1883-b4da-4e64-b27a-fdf11ff21ac2"). InnerVolumeSpecName "kube-api-access-jgcqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.215745 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9b8d6985-79fe-4be9-a7e3-5c762214d678" (UID: "9b8d6985-79fe-4be9-a7e3-5c762214d678"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.248142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21aa9a49-fa80-4c66-97bb-bcd28c31aaef" (UID: "21aa9a49-fa80-4c66-97bb-bcd28c31aaef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.268574 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a22825e1-d87e-48cf-b169-7d1360923af4" (UID: "a22825e1-d87e-48cf-b169-7d1360923af4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.280636 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17fb1883-b4da-4e64-b27a-fdf11ff21ac2" (UID: "17fb1883-b4da-4e64-b27a-fdf11ff21ac2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304027 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a22825e1-d87e-48cf-b169-7d1360923af4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304053 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304064 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgcqw\" (UniqueName: \"kubernetes.io/projected/17fb1883-b4da-4e64-b27a-fdf11ff21ac2-kube-api-access-jgcqw\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304081 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304091 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-949ms\" (UniqueName: \"kubernetes.io/projected/7ef31d38-da28-4060-b917-2b2488e14067-kube-api-access-949ms\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304101 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnw8b\" (UniqueName: \"kubernetes.io/projected/9b8d6985-79fe-4be9-a7e3-5c762214d678-kube-api-access-xnw8b\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304110 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9b8d6985-79fe-4be9-a7e3-5c762214d678-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304121 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5z6d\" (UniqueName: \"kubernetes.io/projected/a22825e1-d87e-48cf-b169-7d1360923af4-kube-api-access-j5z6d\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.304129 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8vbg\" (UniqueName: \"kubernetes.io/projected/21aa9a49-fa80-4c66-97bb-bcd28c31aaef-kube-api-access-b8vbg\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.313975 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.332988 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ef31d38-da28-4060-b917-2b2488e14067" (UID: "7ef31d38-da28-4060-b917-2b2488e14067"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.406039 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ef31d38-da28-4060-b917-2b2488e14067-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.486573 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vtrzd"] Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.866751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8424" event={"ID":"a22825e1-d87e-48cf-b169-7d1360923af4","Type":"ContainerDied","Data":"126d0bb6f2856b6e90a98922a8bfde18bb83ae83aaf34175baf63dd2dd569d27"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.866787 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8424" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.866815 4768 scope.go:117] "RemoveContainer" containerID="b5f2cdb4a902253311831ffa20ba8ed2c1f5cc1b4cba151d29e6c7acfee8b219" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.868021 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" event={"ID":"d17e8f38-c1cf-4774-ad10-d2e08512c158","Type":"ContainerStarted","Data":"78ab549994d706aef20414dc17eca4f4cbb1c3ff2fc89105e2e308d414700cc0"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.868058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" event={"ID":"d17e8f38-c1cf-4774-ad10-d2e08512c158","Type":"ContainerStarted","Data":"68650c957c13d4a23af78c75948bed7f2a12a6feaf99f6f7a7e76a29427858c1"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.868075 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.870709 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" event={"ID":"9b8d6985-79fe-4be9-a7e3-5c762214d678","Type":"ContainerDied","Data":"aab1be0345c20098b90015c53a3eb20c79726a92eed6aa8f33ba121e023b5209"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.870795 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6zm9x" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.873251 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkqp" event={"ID":"7ef31d38-da28-4060-b917-2b2488e14067","Type":"ContainerDied","Data":"c4002352a1b1d4595ef7aa71318e95a6d49e4ed9566ac3a09be5882b14d885c2"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.873292 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkqp" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.873316 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.874837 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjmmt" event={"ID":"21aa9a49-fa80-4c66-97bb-bcd28c31aaef","Type":"ContainerDied","Data":"9361c2dc7af9b1ce4195b740a8683eb9b4bac9e7f67fcddc8b4d6b9dd01f9e2e"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.874921 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjmmt" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.887278 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzqw9" event={"ID":"17fb1883-b4da-4e64-b27a-fdf11ff21ac2","Type":"ContainerDied","Data":"7c6b3dd40f795577fdbe4b0a0dcdc2c441cfc0fd184aa97e337f35d390204733"} Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.887431 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzqw9" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.888256 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vtrzd" podStartSLOduration=1.888233029 podStartE2EDuration="1.888233029s" podCreationTimestamp="2025-11-24 17:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:53:46.885300376 +0000 UTC m=+265.745882173" watchObservedRunningTime="2025-11-24 17:53:46.888233029 +0000 UTC m=+265.748814806" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.917172 4768 scope.go:117] "RemoveContainer" containerID="ea8f31b8bba31a3efb38f395734c7ace91156b704587bbef6b0df43ae7af57e2" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.934897 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8424"] Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.949443 4768 scope.go:117] "RemoveContainer" containerID="bf170d45e1f1181d8373d47566b043b3f77f9d8e4582b89e013230f3224948e0" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.950974 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x8424"] Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.980781 4768 scope.go:117] "RemoveContainer" containerID="cfb967f735aa5e15d268e8ab56d2e2adb067db43ca98f86627c8c57c958d0835" Nov 24 17:53:46 crc kubenswrapper[4768]: I1124 17:53:46.995332 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjmmt"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.004316 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjmmt"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.005676 4768 scope.go:117] "RemoveContainer" containerID="c4e7b35da1dea43059fa4db6d4567f066e9b4405a4488b168be0502c5d564db9" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.006397 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zm9x"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 
17:53:47.009073 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6zm9x"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.015045 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzqw9"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.019588 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzqw9"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.024562 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnkqp"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.025051 4768 scope.go:117] "RemoveContainer" containerID="b355b66edffc87f4b8caf11b436ad330a4c0326c0510effb8667cae92b7eb06d" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.027864 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cnkqp"] Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.043991 4768 scope.go:117] "RemoveContainer" containerID="8612189cf5769aa4220ab8ea460bf2378e6710f77a479ac3e9048eec795062f2" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.057081 4768 scope.go:117] "RemoveContainer" containerID="a39192990610a7d55212ea4fa15d5aa8ca69dbe380f351620a21cd70f067c952" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.070107 4768 scope.go:117] "RemoveContainer" containerID="830404e2718344d36d4ccb6d59351c8226cbb776b3c2c41c96ed7e1d6fd352e7" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.082741 4768 scope.go:117] "RemoveContainer" containerID="691a97bfc5ba991cd0822c00b217b9ce229ac3e09cdd462e018cae0171fb0dfe" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.094617 4768 scope.go:117] "RemoveContainer" containerID="5165c75895d94d66551a3a764c7f2d987ff8c54d2dc4a35effcdf96ead4412bd" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.104601 4768 scope.go:117] "RemoveContainer" containerID="68069a32fdde0b85202576dc0fb9895d8d97fdfaf9a5ce8ebc64a29ce3f5ff64" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.115351 4768 scope.go:117] "RemoveContainer" containerID="5ba56e8f90f48818c07dfbcb3fa837863db180a44ba620dd9a704b2cb4b08565" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.904875 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" path="/var/lib/kubelet/pods/17fb1883-b4da-4e64-b27a-fdf11ff21ac2/volumes" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.906695 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" path="/var/lib/kubelet/pods/21aa9a49-fa80-4c66-97bb-bcd28c31aaef/volumes" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.907458 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef31d38-da28-4060-b917-2b2488e14067" path="/var/lib/kubelet/pods/7ef31d38-da28-4060-b917-2b2488e14067/volumes" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.909056 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" path="/var/lib/kubelet/pods/9b8d6985-79fe-4be9-a7e3-5c762214d678/volumes" Nov 24 17:53:47 crc kubenswrapper[4768]: I1124 17:53:47.909790 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" path="/var/lib/kubelet/pods/a22825e1-d87e-48cf-b169-7d1360923af4/volumes" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 
17:53:48.650569 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qzlbb"] Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651032 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651045 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651053 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651059 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651071 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651078 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651088 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651094 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651102 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651107 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651114 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651120 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651128 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651133 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651142 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651148 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651155 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="extract-content" Nov 24 17:53:48 crc 
kubenswrapper[4768]: I1124 17:53:48.651162 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651174 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651180 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="extract-content" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651186 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651192 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="extract-utilities" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651200 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651211 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" Nov 24 17:53:48 crc kubenswrapper[4768]: E1124 17:53:48.651255 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651260 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651359 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8d6985-79fe-4be9-a7e3-5c762214d678" containerName="marketplace-operator" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651370 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22825e1-d87e-48cf-b169-7d1360923af4" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651380 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="21aa9a49-fa80-4c66-97bb-bcd28c31aaef" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651389 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef31d38-da28-4060-b917-2b2488e14067" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.651400 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fb1883-b4da-4e64-b27a-fdf11ff21ac2" containerName="registry-server" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.652154 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.654339 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.663046 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzlbb"] Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.842387 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhcw\" (UniqueName: \"kubernetes.io/projected/66dab92d-4fda-4b03-82a4-9ceb5638b114-kube-api-access-6qhcw\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.842461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66dab92d-4fda-4b03-82a4-9ceb5638b114-utilities\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.842543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66dab92d-4fda-4b03-82a4-9ceb5638b114-catalog-content\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.943802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66dab92d-4fda-4b03-82a4-9ceb5638b114-utilities\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.944662 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66dab92d-4fda-4b03-82a4-9ceb5638b114-utilities\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.944931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66dab92d-4fda-4b03-82a4-9ceb5638b114-catalog-content\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.945064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qhcw\" (UniqueName: \"kubernetes.io/projected/66dab92d-4fda-4b03-82a4-9ceb5638b114-kube-api-access-6qhcw\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.945411 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66dab92d-4fda-4b03-82a4-9ceb5638b114-catalog-content\") pod \"redhat-marketplace-qzlbb\" (UID: 
\"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.965595 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qhcw\" (UniqueName: \"kubernetes.io/projected/66dab92d-4fda-4b03-82a4-9ceb5638b114-kube-api-access-6qhcw\") pod \"redhat-marketplace-qzlbb\" (UID: \"66dab92d-4fda-4b03-82a4-9ceb5638b114\") " pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:48 crc kubenswrapper[4768]: I1124 17:53:48.969213 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.256538 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cd76t"] Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.259903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.263066 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.275343 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd76t"] Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.356809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-catalog-content\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.356894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7br\" (UniqueName: \"kubernetes.io/projected/8ebedabf-6ef4-463c-98d0-d2afea402f61-kube-api-access-pv7br\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.357264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-utilities\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.374712 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzlbb"] Nov 24 17:53:49 crc kubenswrapper[4768]: W1124 17:53:49.389474 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66dab92d_4fda_4b03_82a4_9ceb5638b114.slice/crio-271635d915160ac59eee74ad4698dabedcf61bd4739133eb408b4b304cf87dc6 WatchSource:0}: Error finding container 271635d915160ac59eee74ad4698dabedcf61bd4739133eb408b4b304cf87dc6: Status 404 returned error can't find the container with id 271635d915160ac59eee74ad4698dabedcf61bd4739133eb408b4b304cf87dc6 Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.458992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-catalog-content\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.459088 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7br\" (UniqueName: \"kubernetes.io/projected/8ebedabf-6ef4-463c-98d0-d2afea402f61-kube-api-access-pv7br\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.459192 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-utilities\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.459570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-catalog-content\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.459701 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-utilities\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.482662 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7br\" (UniqueName: \"kubernetes.io/projected/8ebedabf-6ef4-463c-98d0-d2afea402f61-kube-api-access-pv7br\") pod \"redhat-operators-cd76t\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.588016 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.915910 4768 generic.go:334] "Generic (PLEG): container finished" podID="66dab92d-4fda-4b03-82a4-9ceb5638b114" containerID="2f40626fae7222e49c96c368afd111382902d68926014faf360c39e385ac645d" exitCode=0 Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.915955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzlbb" event={"ID":"66dab92d-4fda-4b03-82a4-9ceb5638b114","Type":"ContainerDied","Data":"2f40626fae7222e49c96c368afd111382902d68926014faf360c39e385ac645d"} Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.915983 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzlbb" event={"ID":"66dab92d-4fda-4b03-82a4-9ceb5638b114","Type":"ContainerStarted","Data":"271635d915160ac59eee74ad4698dabedcf61bd4739133eb408b4b304cf87dc6"} Nov 24 17:53:49 crc kubenswrapper[4768]: W1124 17:53:49.983677 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ebedabf_6ef4_463c_98d0_d2afea402f61.slice/crio-d9fce55144a1cd9130935b3075c55ec127f78561fa3327ac66a5e0c382c07901 WatchSource:0}: Error finding container d9fce55144a1cd9130935b3075c55ec127f78561fa3327ac66a5e0c382c07901: Status 404 returned error can't find the container with id d9fce55144a1cd9130935b3075c55ec127f78561fa3327ac66a5e0c382c07901 Nov 24 17:53:49 crc kubenswrapper[4768]: I1124 17:53:49.986783 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd76t"] Nov 24 17:53:50 crc kubenswrapper[4768]: I1124 17:53:50.926363 4768 generic.go:334] "Generic (PLEG): container finished" podID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerID="24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030" exitCode=0 Nov 24 17:53:50 crc kubenswrapper[4768]: I1124 17:53:50.926537 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerDied","Data":"24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030"} Nov 24 17:53:50 crc kubenswrapper[4768]: I1124 17:53:50.927224 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerStarted","Data":"d9fce55144a1cd9130935b3075c55ec127f78561fa3327ac66a5e0c382c07901"} Nov 24 17:53:50 crc kubenswrapper[4768]: I1124 17:53:50.931398 4768 generic.go:334] "Generic (PLEG): container finished" podID="66dab92d-4fda-4b03-82a4-9ceb5638b114" containerID="727ed8a753890f33d2dee03f4ec34d219c15eb908a84527247c7c0c3cbcaed43" exitCode=0 Nov 24 17:53:50 crc kubenswrapper[4768]: I1124 17:53:50.931447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzlbb" event={"ID":"66dab92d-4fda-4b03-82a4-9ceb5638b114","Type":"ContainerDied","Data":"727ed8a753890f33d2dee03f4ec34d219c15eb908a84527247c7c0c3cbcaed43"} Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.053703 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8n975"] Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.055566 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.058886 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.064226 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n975"] Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.076458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468bs\" (UniqueName: \"kubernetes.io/projected/e5b8263d-5b26-40f8-a344-761b9d19d252-kube-api-access-468bs\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.076559 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b8263d-5b26-40f8-a344-761b9d19d252-utilities\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.076636 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b8263d-5b26-40f8-a344-761b9d19d252-catalog-content\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.177709 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b8263d-5b26-40f8-a344-761b9d19d252-catalog-content\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.177775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468bs\" (UniqueName: \"kubernetes.io/projected/e5b8263d-5b26-40f8-a344-761b9d19d252-kube-api-access-468bs\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.177826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b8263d-5b26-40f8-a344-761b9d19d252-utilities\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.178254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b8263d-5b26-40f8-a344-761b9d19d252-catalog-content\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.178291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b8263d-5b26-40f8-a344-761b9d19d252-utilities\") pod \"community-operators-8n975\" (UID: 
\"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.195764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468bs\" (UniqueName: \"kubernetes.io/projected/e5b8263d-5b26-40f8-a344-761b9d19d252-kube-api-access-468bs\") pod \"community-operators-8n975\" (UID: \"e5b8263d-5b26-40f8-a344-761b9d19d252\") " pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.373118 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n975" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.655109 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zzf6q"] Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.656893 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.659666 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.662750 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzf6q"] Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.686843 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-utilities\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.687287 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwlv\" (UniqueName: \"kubernetes.io/projected/897ec217-5614-490e-893e-52e2f87b7422-kube-api-access-mpwlv\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.687325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-catalog-content\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.764780 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n975"] Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.788738 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-utilities\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.788795 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwlv\" (UniqueName: \"kubernetes.io/projected/897ec217-5614-490e-893e-52e2f87b7422-kube-api-access-mpwlv\") pod \"certified-operators-zzf6q\" (UID: 
\"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.788829 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-catalog-content\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.789244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-utilities\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.789298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-catalog-content\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.808458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwlv\" (UniqueName: \"kubernetes.io/projected/897ec217-5614-490e-893e-52e2f87b7422-kube-api-access-mpwlv\") pod \"certified-operators-zzf6q\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.939065 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzlbb" event={"ID":"66dab92d-4fda-4b03-82a4-9ceb5638b114","Type":"ContainerStarted","Data":"94b2ad4abfe748716e1e9e5ffbacae8dfc16315f0b760acc33a43d3e7f089ba1"} Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.940437 4768 generic.go:334] "Generic (PLEG): container finished" podID="e5b8263d-5b26-40f8-a344-761b9d19d252" containerID="55f703c2e69279f803c24cd7eecb3ef4367dbc5cea24bf8edb6c5cb0e92b01e3" exitCode=0 Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.940863 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n975" event={"ID":"e5b8263d-5b26-40f8-a344-761b9d19d252","Type":"ContainerDied","Data":"55f703c2e69279f803c24cd7eecb3ef4367dbc5cea24bf8edb6c5cb0e92b01e3"} Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.940957 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n975" event={"ID":"e5b8263d-5b26-40f8-a344-761b9d19d252","Type":"ContainerStarted","Data":"14195268c57b7631dfc6e599a7fa467c80ae8b41c902d56aa1c3622e048f12f2"} Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.942460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerStarted","Data":"cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846"} Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.960215 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qzlbb" podStartSLOduration=2.520010826 podStartE2EDuration="3.960197544s" podCreationTimestamp="2025-11-24 17:53:48 +0000 UTC" firstStartedPulling="2025-11-24 
17:53:49.918034646 +0000 UTC m=+268.778616423" lastFinishedPulling="2025-11-24 17:53:51.358221344 +0000 UTC m=+270.218803141" observedRunningTime="2025-11-24 17:53:51.958411653 +0000 UTC m=+270.818993440" watchObservedRunningTime="2025-11-24 17:53:51.960197544 +0000 UTC m=+270.820779311" Nov 24 17:53:51 crc kubenswrapper[4768]: I1124 17:53:51.982761 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.358954 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzf6q"] Nov 24 17:53:52 crc kubenswrapper[4768]: W1124 17:53:52.387210 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod897ec217_5614_490e_893e_52e2f87b7422.slice/crio-ed20f1d6fc627134322c5124befd3c12a02f42edc7f5e8518bce5fc65d00fe42 WatchSource:0}: Error finding container ed20f1d6fc627134322c5124befd3c12a02f42edc7f5e8518bce5fc65d00fe42: Status 404 returned error can't find the container with id ed20f1d6fc627134322c5124befd3c12a02f42edc7f5e8518bce5fc65d00fe42 Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.948700 4768 generic.go:334] "Generic (PLEG): container finished" podID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerID="cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846" exitCode=0 Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.948761 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerDied","Data":"cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846"} Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.950908 4768 generic.go:334] "Generic (PLEG): container finished" podID="897ec217-5614-490e-893e-52e2f87b7422" containerID="3b0d7ed3258fac4bab7163a55e1906ec0dc4b92ec306bbb59e565df493e83d18" exitCode=0 Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.950956 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerDied","Data":"3b0d7ed3258fac4bab7163a55e1906ec0dc4b92ec306bbb59e565df493e83d18"} Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.950974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerStarted","Data":"ed20f1d6fc627134322c5124befd3c12a02f42edc7f5e8518bce5fc65d00fe42"} Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.953840 4768 generic.go:334] "Generic (PLEG): container finished" podID="e5b8263d-5b26-40f8-a344-761b9d19d252" containerID="94b8c42fbedee9256bc10c09b571d331642d926d3f1375ae1c4f6b93e8429214" exitCode=0 Nov 24 17:53:52 crc kubenswrapper[4768]: I1124 17:53:52.954870 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n975" event={"ID":"e5b8263d-5b26-40f8-a344-761b9d19d252","Type":"ContainerDied","Data":"94b8c42fbedee9256bc10c09b571d331642d926d3f1375ae1c4f6b93e8429214"} Nov 24 17:53:54 crc kubenswrapper[4768]: I1124 17:53:54.967903 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" 
event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerStarted","Data":"1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443"} Nov 24 17:53:54 crc kubenswrapper[4768]: I1124 17:53:54.972668 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerStarted","Data":"f56ca54822029768a00aa9ef3f3d65afb4a5d3420beac7790d9f7862af2a0dd1"} Nov 24 17:53:54 crc kubenswrapper[4768]: I1124 17:53:54.974752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n975" event={"ID":"e5b8263d-5b26-40f8-a344-761b9d19d252","Type":"ContainerStarted","Data":"03601c57ff3a5c2f675d876c0862d26ab0c9fd72bca19bbe976edde7b8e4e2ac"} Nov 24 17:53:55 crc kubenswrapper[4768]: I1124 17:53:55.011442 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cd76t" podStartSLOduration=3.607493867 podStartE2EDuration="6.011426097s" podCreationTimestamp="2025-11-24 17:53:49 +0000 UTC" firstStartedPulling="2025-11-24 17:53:50.929847307 +0000 UTC m=+269.790429124" lastFinishedPulling="2025-11-24 17:53:53.333779587 +0000 UTC m=+272.194361354" observedRunningTime="2025-11-24 17:53:54.990007491 +0000 UTC m=+273.850589268" watchObservedRunningTime="2025-11-24 17:53:55.011426097 +0000 UTC m=+273.872007874" Nov 24 17:53:55 crc kubenswrapper[4768]: I1124 17:53:55.012873 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8n975" podStartSLOduration=2.605857038 podStartE2EDuration="4.012869088s" podCreationTimestamp="2025-11-24 17:53:51 +0000 UTC" firstStartedPulling="2025-11-24 17:53:51.941545586 +0000 UTC m=+270.802127373" lastFinishedPulling="2025-11-24 17:53:53.348557656 +0000 UTC m=+272.209139423" observedRunningTime="2025-11-24 17:53:55.011237371 +0000 UTC m=+273.871819148" watchObservedRunningTime="2025-11-24 17:53:55.012869088 +0000 UTC m=+273.873450865" Nov 24 17:53:55 crc kubenswrapper[4768]: I1124 17:53:55.982775 4768 generic.go:334] "Generic (PLEG): container finished" podID="897ec217-5614-490e-893e-52e2f87b7422" containerID="f56ca54822029768a00aa9ef3f3d65afb4a5d3420beac7790d9f7862af2a0dd1" exitCode=0 Nov 24 17:53:55 crc kubenswrapper[4768]: I1124 17:53:55.982876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerDied","Data":"f56ca54822029768a00aa9ef3f3d65afb4a5d3420beac7790d9f7862af2a0dd1"} Nov 24 17:53:56 crc kubenswrapper[4768]: I1124 17:53:56.990354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerStarted","Data":"fb8ec67bf788812bef579c2597a6151eb95b6192e2cd6225d790f1f2853bce55"} Nov 24 17:53:57 crc kubenswrapper[4768]: I1124 17:53:57.011042 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zzf6q" podStartSLOduration=2.504837969 podStartE2EDuration="6.011023821s" podCreationTimestamp="2025-11-24 17:53:51 +0000 UTC" firstStartedPulling="2025-11-24 17:53:52.952150184 +0000 UTC m=+271.812731961" lastFinishedPulling="2025-11-24 17:53:56.458336046 +0000 UTC m=+275.318917813" observedRunningTime="2025-11-24 17:53:57.007334146 +0000 UTC m=+275.867915933" watchObservedRunningTime="2025-11-24 
17:53:57.011023821 +0000 UTC m=+275.871605608" Nov 24 17:53:58 crc kubenswrapper[4768]: I1124 17:53:58.970410 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:58 crc kubenswrapper[4768]: I1124 17:53:58.970827 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:59 crc kubenswrapper[4768]: I1124 17:53:59.021933 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:59 crc kubenswrapper[4768]: I1124 17:53:59.082559 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qzlbb" Nov 24 17:53:59 crc kubenswrapper[4768]: I1124 17:53:59.588872 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:59 crc kubenswrapper[4768]: I1124 17:53:59.589238 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:53:59 crc kubenswrapper[4768]: I1124 17:53:59.628221 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:54:00 crc kubenswrapper[4768]: I1124 17:54:00.041313 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 17:54:01 crc kubenswrapper[4768]: I1124 17:54:01.374244 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8n975" Nov 24 17:54:01 crc kubenswrapper[4768]: I1124 17:54:01.374299 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8n975" Nov 24 17:54:01 crc kubenswrapper[4768]: I1124 17:54:01.411447 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8n975" Nov 24 17:54:01 crc kubenswrapper[4768]: I1124 17:54:01.983034 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:54:01 crc kubenswrapper[4768]: I1124 17:54:01.983113 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:54:02 crc kubenswrapper[4768]: I1124 17:54:02.025877 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:54:02 crc kubenswrapper[4768]: I1124 17:54:02.053625 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8n975" Nov 24 17:54:02 crc kubenswrapper[4768]: I1124 17:54:02.081262 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 17:55:13 crc kubenswrapper[4768]: I1124 17:55:13.656753 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:55:13 crc kubenswrapper[4768]: I1124 17:55:13.657553 4768 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:55:43 crc kubenswrapper[4768]: I1124 17:55:43.656163 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:55:43 crc kubenswrapper[4768]: I1124 17:55:43.656687 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.656096 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.656765 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.656822 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.657651 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"886b914367f5a2fa9c56278a5ec2fd4868e1d9e80fd680b439865ae06b105406"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.657722 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://886b914367f5a2fa9c56278a5ec2fd4868e1d9e80fd680b439865ae06b105406" gracePeriod=600 Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.807153 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="886b914367f5a2fa9c56278a5ec2fd4868e1d9e80fd680b439865ae06b105406" exitCode=0 Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.807208 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"886b914367f5a2fa9c56278a5ec2fd4868e1d9e80fd680b439865ae06b105406"} Nov 24 17:56:13 crc kubenswrapper[4768]: I1124 17:56:13.807338 4768 scope.go:117] "RemoveContainer" 
containerID="cc1584532482b1aa0f6cbdef30a2d09d3f2a5ba6f610242c8f15433eda071c50" Nov 24 17:56:14 crc kubenswrapper[4768]: I1124 17:56:14.817530 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"5b11ee9a43148b8f430bd2257b4fc5d4ab0802be7470cf787730b8c0e93d7060"} Nov 24 17:57:22 crc kubenswrapper[4768]: I1124 17:57:22.960780 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-skf28"] Nov 24 17:57:22 crc kubenswrapper[4768]: I1124 17:57:22.962234 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:22 crc kubenswrapper[4768]: I1124 17:57:22.980377 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-skf28"] Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112178 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db12729b-7890-4e01-9278-f50fdafa4b4a-registry-certificates\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112222 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db12729b-7890-4e01-9278-f50fdafa4b4a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112249 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db12729b-7890-4e01-9278-f50fdafa4b4a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112279 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5t2s\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-kube-api-access-n5t2s\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112299 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db12729b-7890-4e01-9278-f50fdafa4b4a-trusted-ca\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112422 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-registry-tls\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.112564 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-bound-sa-token\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.175892 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db12729b-7890-4e01-9278-f50fdafa4b4a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213732 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db12729b-7890-4e01-9278-f50fdafa4b4a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5t2s\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-kube-api-access-n5t2s\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213798 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db12729b-7890-4e01-9278-f50fdafa4b4a-trusted-ca\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213821 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-registry-tls\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213844 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-bound-sa-token\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.213873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db12729b-7890-4e01-9278-f50fdafa4b4a-registry-certificates\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.214216 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db12729b-7890-4e01-9278-f50fdafa4b4a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.214900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db12729b-7890-4e01-9278-f50fdafa4b4a-registry-certificates\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.215074 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db12729b-7890-4e01-9278-f50fdafa4b4a-trusted-ca\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.219865 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db12729b-7890-4e01-9278-f50fdafa4b4a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.219906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-registry-tls\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.229872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5t2s\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-kube-api-access-n5t2s\") pod \"image-registry-66df7c8f76-skf28\" (UID: \"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.231143 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db12729b-7890-4e01-9278-f50fdafa4b4a-bound-sa-token\") pod \"image-registry-66df7c8f76-skf28\" (UID: 
\"db12729b-7890-4e01-9278-f50fdafa4b4a\") " pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.276325 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:23 crc kubenswrapper[4768]: I1124 17:57:23.664300 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-skf28"] Nov 24 17:57:24 crc kubenswrapper[4768]: I1124 17:57:24.212406 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" event={"ID":"db12729b-7890-4e01-9278-f50fdafa4b4a","Type":"ContainerStarted","Data":"f98324fa9ecdc9fb149bc3e3ba33885e13e27d7c4723fcda43506568d35159d6"} Nov 24 17:57:24 crc kubenswrapper[4768]: I1124 17:57:24.212869 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" event={"ID":"db12729b-7890-4e01-9278-f50fdafa4b4a","Type":"ContainerStarted","Data":"0e0c1746a205a92460ef0d12cb9ff1be0182824c73a1ef5799a246850bfe02d3"} Nov 24 17:57:24 crc kubenswrapper[4768]: I1124 17:57:24.212908 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:24 crc kubenswrapper[4768]: I1124 17:57:24.234984 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" podStartSLOduration=2.234962067 podStartE2EDuration="2.234962067s" podCreationTimestamp="2025-11-24 17:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:57:24.231639124 +0000 UTC m=+483.092220991" watchObservedRunningTime="2025-11-24 17:57:24.234962067 +0000 UTC m=+483.095543854" Nov 24 17:57:43 crc kubenswrapper[4768]: I1124 17:57:43.283178 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-skf28" Nov 24 17:57:43 crc kubenswrapper[4768]: I1124 17:57:43.342309 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zzvkd"] Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.390281 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" podUID="3bd473c0-17b2-4d7c-830a-99afe5266762" containerName="registry" containerID="cri-o://b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512" gracePeriod=30 Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.736853 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.895831 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n25cm\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-kube-api-access-n25cm\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.895895 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-bound-sa-token\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.895940 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-certificates\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.896277 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.896342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-tls\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.896380 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd473c0-17b2-4d7c-830a-99afe5266762-ca-trust-extracted\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.896458 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-trusted-ca\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.896540 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd473c0-17b2-4d7c-830a-99afe5266762-installation-pull-secrets\") pod \"3bd473c0-17b2-4d7c-830a-99afe5266762\" (UID: \"3bd473c0-17b2-4d7c-830a-99afe5266762\") " Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.898037 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.898721 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.906694 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd473c0-17b2-4d7c-830a-99afe5266762-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.907825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-kube-api-access-n25cm" (OuterVolumeSpecName: "kube-api-access-n25cm") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "kube-api-access-n25cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.907961 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.908460 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.913317 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.918119 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd473c0-17b2-4d7c-830a-99afe5266762-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3bd473c0-17b2-4d7c-830a-99afe5266762" (UID: "3bd473c0-17b2-4d7c-830a-99afe5266762"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998147 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n25cm\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-kube-api-access-n25cm\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998200 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998213 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998228 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd473c0-17b2-4d7c-830a-99afe5266762-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998242 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd473c0-17b2-4d7c-830a-99afe5266762-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998254 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd473c0-17b2-4d7c-830a-99afe5266762-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:08 crc kubenswrapper[4768]: I1124 17:58:08.998265 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd473c0-17b2-4d7c-830a-99afe5266762-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.488407 4768 generic.go:334] "Generic (PLEG): container finished" podID="3bd473c0-17b2-4d7c-830a-99afe5266762" containerID="b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512" exitCode=0 Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.488442 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.488446 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" event={"ID":"3bd473c0-17b2-4d7c-830a-99afe5266762","Type":"ContainerDied","Data":"b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512"} Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.488513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zzvkd" event={"ID":"3bd473c0-17b2-4d7c-830a-99afe5266762","Type":"ContainerDied","Data":"ff4385609ffbb76fe82855a6fd39b4877e1b18acb5003b4a51519cfb506ca5cb"} Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.488529 4768 scope.go:117] "RemoveContainer" containerID="b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512" Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.516982 4768 scope.go:117] "RemoveContainer" containerID="b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512" Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.517150 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zzvkd"] Nov 24 17:58:09 crc kubenswrapper[4768]: E1124 17:58:09.517818 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512\": container with ID starting with b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512 not found: ID does not exist" containerID="b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512" Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.517919 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512"} err="failed to get container status \"b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512\": rpc error: code = NotFound desc = could not find container \"b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512\": container with ID starting with b0171ad958d9ab5a092566c7030f4aea13bbff262b1f2a3aa998844b6677c512 not found: ID does not exist" Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.523294 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zzvkd"] Nov 24 17:58:09 crc kubenswrapper[4768]: I1124 17:58:09.912442 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd473c0-17b2-4d7c-830a-99afe5266762" path="/var/lib/kubelet/pods/3bd473c0-17b2-4d7c-830a-99afe5266762/volumes" Nov 24 17:58:13 crc kubenswrapper[4768]: I1124 17:58:13.656928 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:58:13 crc kubenswrapper[4768]: I1124 17:58:13.657266 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:58:43 crc 
Nov 24 17:58:43 crc kubenswrapper[4768]: I1124 17:58:43.656631 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 17:58:43 crc kubenswrapper[4768]: I1124 17:58:43.657748 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 17:59:13 crc kubenswrapper[4768]: I1124 17:59:13.656192 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 17:59:13 crc kubenswrapper[4768]: I1124 17:59:13.656950 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 17:59:13 crc kubenswrapper[4768]: I1124 17:59:13.657033 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj"
Nov 24 17:59:13 crc kubenswrapper[4768]: I1124 17:59:13.657942 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b11ee9a43148b8f430bd2257b4fc5d4ab0802be7470cf787730b8c0e93d7060"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 17:59:13 crc kubenswrapper[4768]: I1124 17:59:13.658044 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://5b11ee9a43148b8f430bd2257b4fc5d4ab0802be7470cf787730b8c0e93d7060" gracePeriod=600
Nov 24 17:59:14 crc kubenswrapper[4768]: I1124 17:59:14.305827 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="5b11ee9a43148b8f430bd2257b4fc5d4ab0802be7470cf787730b8c0e93d7060" exitCode=0
Nov 24 17:59:14 crc kubenswrapper[4768]: I1124 17:59:14.306096 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"5b11ee9a43148b8f430bd2257b4fc5d4ab0802be7470cf787730b8c0e93d7060"}
Nov 24 17:59:14 crc kubenswrapper[4768]: I1124 17:59:14.306660 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"b4583a9ac279158eca4e8f57a4180ced088f2fed29490556a10e250154558a77"}
containerID="886b914367f5a2fa9c56278a5ec2fd4868e1d9e80fd680b439865ae06b105406" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.155815 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8nrg2"] Nov 24 17:59:34 crc kubenswrapper[4768]: E1124 17:59:34.156960 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd473c0-17b2-4d7c-830a-99afe5266762" containerName="registry" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.156974 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd473c0-17b2-4d7c-830a-99afe5266762" containerName="registry" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.157091 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd473c0-17b2-4d7c-830a-99afe5266762" containerName="registry" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.157649 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.160981 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.161195 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.161433 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-z6grc" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.162591 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-66xg6"] Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.163623 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-66xg6" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.169472 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nhqpn" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.171757 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8nrg2"] Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.183980 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-66xg6"] Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.197694 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2qvx7"] Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.198468 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.200876 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dt98k" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.209986 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2qvx7"] Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.281293 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65bwr\" (UniqueName: \"kubernetes.io/projected/62d5c0eb-892b-455f-8ddd-b2fdb47ea42d-kube-api-access-65bwr\") pod \"cert-manager-cainjector-7f985d654d-8nrg2\" (UID: \"62d5c0eb-892b-455f-8ddd-b2fdb47ea42d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.281367 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9z5j\" (UniqueName: \"kubernetes.io/projected/24caa3d8-4ce8-4918-82c5-2c71e2b95e01-kube-api-access-v9z5j\") pod \"cert-manager-5b446d88c5-66xg6\" (UID: \"24caa3d8-4ce8-4918-82c5-2c71e2b95e01\") " pod="cert-manager/cert-manager-5b446d88c5-66xg6" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.382467 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65bwr\" (UniqueName: \"kubernetes.io/projected/62d5c0eb-892b-455f-8ddd-b2fdb47ea42d-kube-api-access-65bwr\") pod \"cert-manager-cainjector-7f985d654d-8nrg2\" (UID: \"62d5c0eb-892b-455f-8ddd-b2fdb47ea42d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.382534 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9z5j\" (UniqueName: \"kubernetes.io/projected/24caa3d8-4ce8-4918-82c5-2c71e2b95e01-kube-api-access-v9z5j\") pod \"cert-manager-5b446d88c5-66xg6\" (UID: \"24caa3d8-4ce8-4918-82c5-2c71e2b95e01\") " pod="cert-manager/cert-manager-5b446d88c5-66xg6" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.382589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cflkn\" (UniqueName: \"kubernetes.io/projected/3d150fe0-3a31-4024-b158-8dd172e9aa1e-kube-api-access-cflkn\") pod \"cert-manager-webhook-5655c58dd6-2qvx7\" (UID: \"3d150fe0-3a31-4024-b158-8dd172e9aa1e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.404913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9z5j\" (UniqueName: \"kubernetes.io/projected/24caa3d8-4ce8-4918-82c5-2c71e2b95e01-kube-api-access-v9z5j\") pod \"cert-manager-5b446d88c5-66xg6\" (UID: \"24caa3d8-4ce8-4918-82c5-2c71e2b95e01\") " pod="cert-manager/cert-manager-5b446d88c5-66xg6" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.405577 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65bwr\" (UniqueName: \"kubernetes.io/projected/62d5c0eb-892b-455f-8ddd-b2fdb47ea42d-kube-api-access-65bwr\") pod \"cert-manager-cainjector-7f985d654d-8nrg2\" (UID: \"62d5c0eb-892b-455f-8ddd-b2fdb47ea42d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.480132 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.483264 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cflkn\" (UniqueName: \"kubernetes.io/projected/3d150fe0-3a31-4024-b158-8dd172e9aa1e-kube-api-access-cflkn\") pod \"cert-manager-webhook-5655c58dd6-2qvx7\" (UID: \"3d150fe0-3a31-4024-b158-8dd172e9aa1e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.496261 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-66xg6" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.503259 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cflkn\" (UniqueName: \"kubernetes.io/projected/3d150fe0-3a31-4024-b158-8dd172e9aa1e-kube-api-access-cflkn\") pod \"cert-manager-webhook-5655c58dd6-2qvx7\" (UID: \"3d150fe0-3a31-4024-b158-8dd172e9aa1e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.512833 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.692949 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8nrg2"] Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.706113 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.755874 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-66xg6"] Nov 24 17:59:34 crc kubenswrapper[4768]: W1124 17:59:34.762556 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24caa3d8_4ce8_4918_82c5_2c71e2b95e01.slice/crio-817c18243331e78088c1c01da43d5aff000f1bb33926814699e726f376d606f8 WatchSource:0}: Error finding container 817c18243331e78088c1c01da43d5aff000f1bb33926814699e726f376d606f8: Status 404 returned error can't find the container with id 817c18243331e78088c1c01da43d5aff000f1bb33926814699e726f376d606f8 Nov 24 17:59:34 crc kubenswrapper[4768]: I1124 17:59:34.770816 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2qvx7"] Nov 24 17:59:35 crc kubenswrapper[4768]: I1124 17:59:35.492763 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" event={"ID":"62d5c0eb-892b-455f-8ddd-b2fdb47ea42d","Type":"ContainerStarted","Data":"dd080124fd3ad5f67db4705a5b8e1f7386c9cdc5874b7d3defeb224c272d2def"} Nov 24 17:59:35 crc kubenswrapper[4768]: I1124 17:59:35.494288 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" event={"ID":"3d150fe0-3a31-4024-b158-8dd172e9aa1e","Type":"ContainerStarted","Data":"065721c1da478d11f3fe55569c591bdbb76fa0f813db56c804087e312064ca9b"} Nov 24 17:59:35 crc kubenswrapper[4768]: I1124 17:59:35.495658 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-66xg6" event={"ID":"24caa3d8-4ce8-4918-82c5-2c71e2b95e01","Type":"ContainerStarted","Data":"817c18243331e78088c1c01da43d5aff000f1bb33926814699e726f376d606f8"} Nov 24 17:59:38 crc kubenswrapper[4768]: 
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.513439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-66xg6" event={"ID":"24caa3d8-4ce8-4918-82c5-2c71e2b95e01","Type":"ContainerStarted","Data":"cc67457b8036625ff65878efe4c6bedf9af72dbf3fc95a9c9eda5c21bcbd7c02"}
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.514888 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" event={"ID":"62d5c0eb-892b-455f-8ddd-b2fdb47ea42d","Type":"ContainerStarted","Data":"798dfad50d45ee4396e619d94b594f0e478e59edb0c6e945641d52203805f801"}
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.516274 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" event={"ID":"3d150fe0-3a31-4024-b158-8dd172e9aa1e","Type":"ContainerStarted","Data":"60e5e03e719f8e77d050944c8d269c1cb8b871d720be2239ced19a0655f9a89a"}
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.516404 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7"
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.534062 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-66xg6" podStartSLOduration=1.098157044 podStartE2EDuration="4.534043766s" podCreationTimestamp="2025-11-24 17:59:34 +0000 UTC" firstStartedPulling="2025-11-24 17:59:34.767874844 +0000 UTC m=+613.628456621" lastFinishedPulling="2025-11-24 17:59:38.203761536 +0000 UTC m=+617.064343343" observedRunningTime="2025-11-24 17:59:38.530229388 +0000 UTC m=+617.390811175" watchObservedRunningTime="2025-11-24 17:59:38.534043766 +0000 UTC m=+617.394625543"
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.546640 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7" podStartSLOduration=1.14688672 podStartE2EDuration="4.546585193s" podCreationTimestamp="2025-11-24 17:59:34 +0000 UTC" firstStartedPulling="2025-11-24 17:59:34.786520264 +0000 UTC m=+613.647102041" lastFinishedPulling="2025-11-24 17:59:38.186218727 +0000 UTC m=+617.046800514" observedRunningTime="2025-11-24 17:59:38.544090922 +0000 UTC m=+617.404672699" watchObservedRunningTime="2025-11-24 17:59:38.546585193 +0000 UTC m=+617.407166970"
Nov 24 17:59:38 crc kubenswrapper[4768]: I1124 17:59:38.564025 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-8nrg2" podStartSLOduration=1.073006347 podStartE2EDuration="4.564002979s" podCreationTimestamp="2025-11-24 17:59:34 +0000 UTC" firstStartedPulling="2025-11-24 17:59:34.705719034 +0000 UTC m=+613.566300821" lastFinishedPulling="2025-11-24 17:59:38.196715676 +0000 UTC m=+617.057297453" observedRunningTime="2025-11-24 17:59:38.560832069 +0000 UTC m=+617.421413846" watchObservedRunningTime="2025-11-24 17:59:38.564002979 +0000 UTC m=+617.424584756"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.377606 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2gjr"]
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378595 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-controller" containerID="cri-o://16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378751 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="sbdb" containerID="cri-o://e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378803 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="northd" containerID="cri-o://1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378768 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378742 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="nbdb" containerID="cri-o://a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378904 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-acl-logging" containerID="cri-o://53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.378878 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-node" containerID="cri-o://a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.413924 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" containerID="cri-o://a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831" gracePeriod=30
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.517227 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2qvx7"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.559432 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovnkube-controller/3.log"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.562047 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovn-acl-logging/0.log"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.562612 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovn-controller/0.log"
"Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831" exitCode=0 Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563119 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec" exitCode=0 Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563128 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8" exitCode=0 Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563136 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6" exitCode=143 Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563144 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369" exitCode=143 Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563166 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"} Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563211 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"} Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563227 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"} Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"} Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563248 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"} Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.563254 4768 scope.go:117] "RemoveContainer" containerID="27c76ffb136717df22c456e7d03db2b8228eab2442df0a21f048d134e7fe7af8" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.565252 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/2.log" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.565794 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/1.log" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.565850 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="895270a4-4f6a-4be4-9701-8a0f9cbf73d7" containerID="7cd36c7ee341731a5eab683195734326510c57c98fea98906e0139f89383ce09" exitCode=2 Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.565873 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerDied","Data":"7cd36c7ee341731a5eab683195734326510c57c98fea98906e0139f89383ce09"} Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.566455 4768 scope.go:117] "RemoveContainer" containerID="7cd36c7ee341731a5eab683195734326510c57c98fea98906e0139f89383ce09" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.566684 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vssnl_openshift-multus(895270a4-4f6a-4be4-9701-8a0f9cbf73d7)\"" pod="openshift-multus/multus-vssnl" podUID="895270a4-4f6a-4be4-9701-8a0f9cbf73d7" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.604438 4768 scope.go:117] "RemoveContainer" containerID="344484ec32fe5f65cce2d4cb54a12496a32add2fb0a678735b23d75dacfd3ea2" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.801609 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovn-acl-logging/0.log" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.802542 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovn-controller/0.log" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.803407 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840641 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-log-socket\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840698 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-openvswitch\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840726 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-systemd\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840759 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-node-log\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840771 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-log-socket" (OuterVolumeSpecName: "log-socket") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840792 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-ovn\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840828 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840828 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-node-log" (OuterVolumeSpecName: "node-log") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840880 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-slash\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840922 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840951 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-env-overrides\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840981 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-bin\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840988 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-slash" (OuterVolumeSpecName: "host-slash") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.840996 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovn-node-metrics-cert\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841032 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841040 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-var-lib-openvswitch\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841082 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-config\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841112 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dhc7\" (UniqueName: \"kubernetes.io/projected/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-kube-api-access-4dhc7\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841318 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-netd\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841328 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841378 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-ovn-kubernetes\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841413 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-etc-openvswitch\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841457 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-kubelet\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841514 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-netns\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841546 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-systemd-units\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841592 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-script-lib\") pod \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\" (UID: \"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb\") " Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841874 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841904 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841939 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841959 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.841995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842042 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842377 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842666 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842672 4768 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842749 4768 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842770 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842787 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842803 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842806 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842824 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842842 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842855 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842870 4768 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842882 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842894 4768 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842907 4768 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842922 4768 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842936 4768 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.842948 4768 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.849458 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.850454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-kube-api-access-4dhc7" (OuterVolumeSpecName: "kube-api-access-4dhc7") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). 
InnerVolumeSpecName "kube-api-access-4dhc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.857660 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" (UID: "938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870015 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmrps"] Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870251 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870272 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870283 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kubecfg-setup" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870290 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kubecfg-setup" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870305 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-acl-logging" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870312 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-acl-logging" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870319 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="nbdb" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870326 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="nbdb" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870336 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870343 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870355 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870362 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870371 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="northd" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870379 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="northd" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870387 4768 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870393 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870404 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870411 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870424 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-node" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870432 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-node" Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870441 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="sbdb" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870446 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="sbdb" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870571 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="sbdb" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870581 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870587 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870593 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="nbdb" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870600 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870607 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-acl-logging" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870615 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870622 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-node" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870630 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="northd" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870637 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 17:59:44 crc 
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870645 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovn-controller"
Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870724 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870731 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller"
Nov 24 17:59:44 crc kubenswrapper[4768]: E1124 17:59:44.870740 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870746 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.870828 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerName="ovnkube-controller"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.872318 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.944821 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-var-lib-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.944889 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-node-log\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.944939 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-systemd-units\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.944999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-ovnkube-script-lib\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945036 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-cni-netd\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-kubelet\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945086 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-ovn\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945159 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhcx\" (UniqueName: \"kubernetes.io/projected/e29728ac-3cb7-4a0e-b673-558743c3af88-kube-api-access-llhcx\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945202 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-systemd\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e29728ac-3cb7-4a0e-b673-558743c3af88-ovn-node-metrics-cert\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945259 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-env-overrides\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945287 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-etc-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945323 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-log-socket\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-ovnkube-config\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945416 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-slash\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-run-netns\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945576 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-cni-bin\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945604 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945657 4768 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945672 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945687 4768 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 17:59:44.945699 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:44 crc kubenswrapper[4768]: I1124 
17:59:44.945712 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dhc7\" (UniqueName: \"kubernetes.io/projected/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb-kube-api-access-4dhc7\") on node \"crc\" DevicePath \"\"" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.047852 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.047929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-slash\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.047952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.047968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-run-netns\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-cni-bin\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048018 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-run-netns\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048030 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048089 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-cni-bin\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048137 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-var-lib-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048094 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-slash\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048103 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-var-lib-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-node-log\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048241 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-systemd-units\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048277 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-ovnkube-script-lib\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048313 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-node-log\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048314 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-cni-netd\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048344 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-cni-netd\") pod \"ovnkube-node-qmrps\" 
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-kubelet\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-systemd-units\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048409 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-ovn\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-ovn\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llhcx\" (UniqueName: \"kubernetes.io/projected/e29728ac-3cb7-4a0e-b673-558743c3af88-kube-api-access-llhcx\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048510 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-systemd\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048457 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-host-kubelet\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e29728ac-3cb7-4a0e-b673-558743c3af88-ovn-node-metrics-cert\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048591 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-systemd\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-env-overrides\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-etc-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048699 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-log-socket\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-ovnkube-config\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048822 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-etc-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048835 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-log-socket\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.048922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e29728ac-3cb7-4a0e-b673-558743c3af88-run-openvswitch\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.049445 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-env-overrides\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.049571 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-ovnkube-config\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.049919 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e29728ac-3cb7-4a0e-b673-558743c3af88-ovnkube-script-lib\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.052006 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e29728ac-3cb7-4a0e-b673-558743c3af88-ovn-node-metrics-cert\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.070444 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llhcx\" (UniqueName: \"kubernetes.io/projected/e29728ac-3cb7-4a0e-b673-558743c3af88-kube-api-access-llhcx\") pod \"ovnkube-node-qmrps\" (UID: \"e29728ac-3cb7-4a0e-b673-558743c3af88\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.190337 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.574359 4768 generic.go:334] "Generic (PLEG): container finished" podID="e29728ac-3cb7-4a0e-b673-558743c3af88" containerID="60ae532684e4f48e5c30c3db1ebe9c6c82160f5652c60ea5ab5f1b2e0a62b8c9" exitCode=0
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.574444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerDied","Data":"60ae532684e4f48e5c30c3db1ebe9c6c82160f5652c60ea5ab5f1b2e0a62b8c9"}
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.574526 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"1bcbdace6cb24422c5f78931af88d2f449db611081bb0573d0ab412867912db5"}
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.579803 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovn-acl-logging/0.log"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.580788 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2gjr_938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/ovn-controller/0.log"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581341 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006" exitCode=0
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581391 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f" exitCode=0
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581402 4768 generic.go:334] "Generic (PLEG): container finished" podID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" containerID="1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3" exitCode=0
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"}
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581450 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"}
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581463 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"}
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581478 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr" event={"ID":"938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb","Type":"ContainerDied","Data":"b9acfe70cba1fea9c53ebdb3678d91b368181acca63b937ef61622fe45e65ccb"}
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581498 4768 scope.go:117] "RemoveContainer" containerID="a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.581542 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2gjr"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.584983 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/2.log"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.601244 4768 scope.go:117] "RemoveContainer" containerID="e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.627816 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2gjr"]
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.633072 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2gjr"]
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.634281 4768 scope.go:117] "RemoveContainer" containerID="a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.653232 4768 scope.go:117] "RemoveContainer" containerID="1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.669386 4768 scope.go:117] "RemoveContainer" containerID="7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.685546 4768 scope.go:117] "RemoveContainer" containerID="a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.704723 4768 scope.go:117] "RemoveContainer" containerID="53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.727602 4768 scope.go:117] "RemoveContainer" containerID="16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.741743 4768 scope.go:117] "RemoveContainer" containerID="a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.772833 4768 scope.go:117] "RemoveContainer" containerID="a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.773922 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": container with ID starting with a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831 not found: ID does not exist" containerID="a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.774008 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"} err="failed to get container status \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": rpc error: code = NotFound desc = could not find container \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": container with ID starting with a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.774216 4768 scope.go:117] "RemoveContainer" containerID="e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.774677 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": container with ID starting with e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006 not found: ID does not exist" containerID="e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.774721 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"} err="failed to get container status \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": rpc error: code = NotFound desc = could not find container \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": container with ID starting with e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.774755 4768 scope.go:117] "RemoveContainer" containerID="a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.775273 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": container with ID starting with a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f not found: ID does not exist" containerID="a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.775315 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"} err="failed to get container status \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": rpc error: code = NotFound desc = could not find container \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": container with ID starting with a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.775332 4768 scope.go:117] "RemoveContainer" containerID="1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.775579 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": container with ID starting with 1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3 not found: ID does not exist" containerID="1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.775596 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"} err="failed to get container status \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": rpc error: code = NotFound desc = could not find container \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": container with ID starting with 1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.775607 4768 scope.go:117] "RemoveContainer" containerID="7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.775783 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": container with ID starting with 7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec not found: ID does not exist" containerID="7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.775803 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"} err="failed to get container status \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": rpc error: code = NotFound desc = could not find container \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": container with ID starting with 7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.775815 4768 scope.go:117] "RemoveContainer" containerID="a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.776376 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": container with ID starting with a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8 not found: ID does not exist" containerID="a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.776436 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"} err="failed to get container status \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": rpc error: code = NotFound desc = could not find container \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": container with ID starting with a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.776472 4768 scope.go:117] "RemoveContainer" containerID="53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.777279 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": container with ID starting with 53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6 not found: ID does not exist" containerID="53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.777319 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"} err="failed to get container status \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": rpc error: code = NotFound desc = could not find container \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": container with ID starting with 53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.777339 4768 scope.go:117] "RemoveContainer" containerID="16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.777586 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": container with ID starting with 16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369 not found: ID does not exist" containerID="16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.777637 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"} err="failed to get container status \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": rpc error: code = NotFound desc = could not find container \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": container with ID starting with 16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.777653 4768 scope.go:117] "RemoveContainer" containerID="a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"
Nov 24 17:59:45 crc kubenswrapper[4768]: E1124 17:59:45.777845 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": container with ID starting with a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574 not found: ID does not exist" containerID="a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.777862 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"} err="failed to get container status \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": rpc error: code = NotFound desc = could not find container \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": container with ID starting with a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.777875 4768 scope.go:117] "RemoveContainer" containerID="a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.779001 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"} err="failed to get container status \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": rpc error: code = NotFound desc = could not find container \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": container with ID starting with a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.779053 4768 scope.go:117] "RemoveContainer" containerID="e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.780242 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"} err="failed to get container status \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": rpc error: code = NotFound desc = could not find container \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": container with ID starting with e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.780317 4768 scope.go:117] "RemoveContainer" containerID="a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.782823 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"} err="failed to get container status \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": rpc error: code = NotFound desc = could not find container \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": container with ID starting with a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.782852 4768 scope.go:117] "RemoveContainer" containerID="1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.783325 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"} err="failed to get container status \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": rpc error: code = NotFound desc = could not find container \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": container with ID starting with 1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.783355 4768 scope.go:117] "RemoveContainer" containerID="7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.783803 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"} err="failed to get container status \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": rpc error: code = NotFound desc = could not find container \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": container with ID starting with 7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.783858 4768 scope.go:117] "RemoveContainer" containerID="a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.784205 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"} err="failed to get container status \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": rpc error: code = NotFound desc = could not find container \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": container with ID starting with a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.784232 4768 scope.go:117] "RemoveContainer" containerID="53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.784934 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"} err="failed to get container status \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": rpc error: code = NotFound desc = could not find container \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": container with ID starting with 53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.784986 4768 scope.go:117] "RemoveContainer" containerID="16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.785251 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"} err="failed to get container status \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": rpc error: code = NotFound desc = could not find container \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": container with ID starting with 16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.785301 4768 scope.go:117] "RemoveContainer" containerID="a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.785629 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"} err="failed to get container status \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": rpc error: code = NotFound desc = could not find container \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": container with ID starting with a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.785652 4768 scope.go:117] "RemoveContainer" containerID="a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.785911 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831"} err="failed to get container status \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": rpc error: code = NotFound desc = could not find container \"a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831\": container with ID starting with a51960611bd12f0c58bd54acae15f7d2bf604e67c56e9a6eae537e238c236831 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.785929 4768 scope.go:117] "RemoveContainer" containerID="e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.786843 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006"} err="failed to get container status \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": rpc error: code = NotFound desc = could not find container \"e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006\": container with ID starting with e9c12b2fecf5921bfb3ce53216e514cd9506baf6189354bee23cddaa5d8d3006 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.786861 4768 scope.go:117] "RemoveContainer" containerID="a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787188 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f"} err="failed to get container status \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": rpc error: code = NotFound desc = could not find container \"a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f\": container with ID starting with a8a70c7dc5484940dc1103d8398d226f7db8c0e1b1a7a109d24e746aa824931f not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787206 4768 scope.go:117] "RemoveContainer" containerID="1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787438 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3"} err="failed to get container status \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": rpc error: code = NotFound desc = could not find container \"1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3\": container with ID starting with 1b8f41d6cd3dd71aec957753beb6f1beaeee0e31d2ac33b47464b683b46139d3 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787456 4768 scope.go:117] "RemoveContainer" containerID="7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787673 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec"} err="failed to get container status \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": rpc error: code = NotFound desc = could not find container \"7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec\": container with ID starting with 7aab0ecc1f5cd05af42c4cf65dbdf1faf29a58aa0ae321033dce25b8931c5aec not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787691 4768 scope.go:117] "RemoveContainer" containerID="a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787933 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8"} err="failed to get container status \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": rpc error: code = NotFound desc = could not find container \"a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8\": container with ID starting with a549018707591a160fe4933e2b7122adf94b3848e755270793d5c25abbbdc6d8 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.787946 4768 scope.go:117] "RemoveContainer" containerID="53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.788168 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6"} err="failed to get container status \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": rpc error: code = NotFound desc = could not find container \"53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6\": container with ID starting with 53af64e6fc4617d20bdb0720c7407ddf479df20d904f80471a99452afbcbbdc6 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.788195 4768 scope.go:117] "RemoveContainer" containerID="16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.788403 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369"} err="failed to get container status \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": rpc error: code = NotFound desc = could not find container \"16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369\": container with ID starting with 16857ee25250b99bdbf3b9b4952426f4ffc5b7123164da385112f1d017b3e369 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.788414 4768 scope.go:117] "RemoveContainer" containerID="a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.788639 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574"} err="failed to get container status \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": rpc error: code = NotFound desc = could not find container \"a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574\": container with ID starting with a09771e5c0cbfb31c68cef0f57e37ced9b27e315ae8ac21d1939c3049a6fa574 not found: ID does not exist"
Nov 24 17:59:45 crc kubenswrapper[4768]: I1124 17:59:45.906076 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb" path="/var/lib/kubelet/pods/938bbdd8-09f5-44f8-a9a5-3b13c0f8a2cb/volumes"
Nov 24 17:59:46 crc kubenswrapper[4768]: I1124 17:59:46.595085 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"b205fe6a122fb1a128ddd1f8de92e7df9d0fd5b5d3a5068eb3ad648dbd14e33b"}
Nov 24 17:59:46 crc kubenswrapper[4768]: I1124 17:59:46.595649 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"d40a724e4771817378f61720b2b4b2143723d219bd73198cae1d095ae30b4dd1"}
Nov 24 17:59:46 crc kubenswrapper[4768]: I1124 17:59:46.595666 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"958a2b2b8491c193310dcf7bd5fc26c9428978b59472ebfde700d37cb635092e"}
Nov 24 17:59:46 crc kubenswrapper[4768]: I1124 17:59:46.595677 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"ef3bd5c3367099ee57f67f936a89bd706bc866486533476301dc2bbdaf86b769"}
Nov 24 17:59:46 crc kubenswrapper[4768]: I1124 17:59:46.595688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"7e32ee2d66984d270930da1794d4e5dba9cf4887c9afdecc1ca804832e6f3884"}
Nov 24 17:59:46 crc kubenswrapper[4768]: I1124 17:59:46.595698 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"79c2f161c54a9f72906df44c551ad0d8a20e7bbb96c163e503da687d81a6b68d"}
Nov 24 17:59:48 crc kubenswrapper[4768]: I1124 17:59:48.613928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"c887615acce5fe6d1e909f0eaf687be1fa313d641b754515f2551458fd585a33"}
Nov 24 17:59:51 crc kubenswrapper[4768]: I1124 17:59:51.633864 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" event={"ID":"e29728ac-3cb7-4a0e-b673-558743c3af88","Type":"ContainerStarted","Data":"c5dd477078c1cced710f824c16faf61a85e31819b35c9d9aaa6f8920634f0c01"}
Nov 24 17:59:51 crc kubenswrapper[4768]: I1124 17:59:51.634479 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:51 crc kubenswrapper[4768]: I1124 17:59:51.634518 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:51 crc kubenswrapper[4768]: I1124 17:59:51.657734 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:51 crc kubenswrapper[4768]: I1124 17:59:51.663234 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" podStartSLOduration=7.663216387 podStartE2EDuration="7.663216387s" podCreationTimestamp="2025-11-24 17:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:59:51.659013696 +0000 UTC m=+630.519595483" watchObservedRunningTime="2025-11-24 17:59:51.663216387 +0000 UTC m=+630.523798164"
Nov 24 17:59:52 crc kubenswrapper[4768]: I1124 17:59:52.639874 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:52 crc kubenswrapper[4768]: I1124 17:59:52.684847 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps"
Nov 24 17:59:59 crc kubenswrapper[4768]: I1124 17:59:59.898265 4768 scope.go:117] "RemoveContainer" containerID="7cd36c7ee341731a5eab683195734326510c57c98fea98906e0139f89383ce09"
Nov 24 17:59:59 crc kubenswrapper[4768]: E1124 17:59:59.898822 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vssnl_openshift-multus(895270a4-4f6a-4be4-9701-8a0f9cbf73d7)\"" pod="openshift-multus/multus-vssnl" podUID="895270a4-4f6a-4be4-9701-8a0f9cbf73d7"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.134056 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"]
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.135174 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.137730 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.137998 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.145307 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"]
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.147951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-secret-volume\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.148039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8sc\" (UniqueName: \"kubernetes.io/projected/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-kube-api-access-sp8sc\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.148132 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-config-volume\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.249463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8sc\" (UniqueName: \"kubernetes.io/projected/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-kube-api-access-sp8sc\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.249684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-config-volume\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.249788 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-secret-volume\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.251194 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-config-volume\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.257057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-secret-volume\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.268114 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8sc\" (UniqueName: \"kubernetes.io/projected/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-kube-api-access-sp8sc\") pod \"collect-profiles-29400120-gfmb9\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.453823 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.479989 4768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(95c4d8c26f982a4499ff2ce94a56761f1234dc61bc28d6a1466d6942ff75e548): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.480068 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(95c4d8c26f982a4499ff2ce94a56761f1234dc61bc28d6a1466d6942ff75e548): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.480093 4768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(95c4d8c26f982a4499ff2ce94a56761f1234dc61bc28d6a1466d6942ff75e548): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.480144 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager(6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager(6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(95c4d8c26f982a4499ff2ce94a56761f1234dc61bc28d6a1466d6942ff75e548): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.686352 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: I1124 18:00:00.686759 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.709429 4768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(6bbbf6036925b79c928039bae8b4b1f60712411e5678426ded95f01bd9e4a2c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.709512 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(6bbbf6036925b79c928039bae8b4b1f60712411e5678426ded95f01bd9e4a2c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.709537 4768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(6bbbf6036925b79c928039bae8b4b1f60712411e5678426ded95f01bd9e4a2c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:00 crc kubenswrapper[4768]: E1124 18:00:00.709606 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager(6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager(6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(6bbbf6036925b79c928039bae8b4b1f60712411e5678426ded95f01bd9e4a2c3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e"
Nov 24 18:00:12 crc kubenswrapper[4768]: I1124 18:00:12.898157 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"
Nov 24 18:00:12 crc kubenswrapper[4768]: I1124 18:00:12.899694 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:12 crc kubenswrapper[4768]: E1124 18:00:12.935926 4768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(a46f89e4658a3165a71b57fd3c55fe6e3ad292a4d593375f759433a54e488ac9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 18:00:12 crc kubenswrapper[4768]: E1124 18:00:12.936372 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(a46f89e4658a3165a71b57fd3c55fe6e3ad292a4d593375f759433a54e488ac9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:12 crc kubenswrapper[4768]: E1124 18:00:12.936400 4768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(a46f89e4658a3165a71b57fd3c55fe6e3ad292a4d593375f759433a54e488ac9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:12 crc kubenswrapper[4768]: E1124 18:00:12.936458 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager(6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager(6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29400120-gfmb9_openshift-operator-lifecycle-manager_6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e_0(a46f89e4658a3165a71b57fd3c55fe6e3ad292a4d593375f759433a54e488ac9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" Nov 24 18:00:14 crc kubenswrapper[4768]: I1124 18:00:14.898941 4768 scope.go:117] "RemoveContainer" containerID="7cd36c7ee341731a5eab683195734326510c57c98fea98906e0139f89383ce09" Nov 24 18:00:15 crc kubenswrapper[4768]: I1124 18:00:15.214388 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmrps" Nov 24 18:00:15 crc kubenswrapper[4768]: I1124 18:00:15.780160 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vssnl_895270a4-4f6a-4be4-9701-8a0f9cbf73d7/kube-multus/2.log" Nov 24 18:00:15 crc kubenswrapper[4768]: I1124 18:00:15.780519 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vssnl" event={"ID":"895270a4-4f6a-4be4-9701-8a0f9cbf73d7","Type":"ContainerStarted","Data":"fae9e8eef63e6e2448e3bfe23d92101f71d284147e8e550c8bc77aaab0b070ab"} Nov 24 18:00:23 crc kubenswrapper[4768]: I1124 18:00:23.897950 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:23 crc kubenswrapper[4768]: I1124 18:00:23.899811 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:24 crc kubenswrapper[4768]: I1124 18:00:24.106287 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"] Nov 24 18:00:24 crc kubenswrapper[4768]: W1124 18:00:24.115235 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5b4be5_f22d_4371_b8bb_ad4c61f5f29e.slice/crio-58eac312fcfe0d94d5563156a317ce879b3c7cf6313a46289bb26f920d10ace3 WatchSource:0}: Error finding container 58eac312fcfe0d94d5563156a317ce879b3c7cf6313a46289bb26f920d10ace3: Status 404 returned error can't find the container with id 58eac312fcfe0d94d5563156a317ce879b3c7cf6313a46289bb26f920d10ace3 Nov 24 18:00:24 crc kubenswrapper[4768]: I1124 18:00:24.836866 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" containerID="cb909740952a161b457634ca02e7f3bd50236f6792045ef3c443c4c3877a5c9e" exitCode=0 Nov 24 18:00:24 crc kubenswrapper[4768]: I1124 18:00:24.836930 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" event={"ID":"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e","Type":"ContainerDied","Data":"cb909740952a161b457634ca02e7f3bd50236f6792045ef3c443c4c3877a5c9e"} Nov 24 18:00:24 crc kubenswrapper[4768]: I1124 18:00:24.837191 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" event={"ID":"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e","Type":"ContainerStarted","Data":"58eac312fcfe0d94d5563156a317ce879b3c7cf6313a46289bb26f920d10ace3"} Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.628106 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg"] Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.629194 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.631305 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.641407 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg"] Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.802855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.803092 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwsh\" (UniqueName: \"kubernetes.io/projected/1624fb3c-139b-48e7-9b52-36f82ffacfa6-kube-api-access-8cwsh\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.803186 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.904204 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.904444 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cwsh\" (UniqueName: \"kubernetes.io/projected/1624fb3c-139b-48e7-9b52-36f82ffacfa6-kube-api-access-8cwsh\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.904590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.904790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.905200 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.928887 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cwsh\" (UniqueName: \"kubernetes.io/projected/1624fb3c-139b-48e7-9b52-36f82ffacfa6-kube-api-access-8cwsh\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:25 crc kubenswrapper[4768]: I1124 18:00:25.943772 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.096973 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.141012 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg"] Nov 24 18:00:26 crc kubenswrapper[4768]: W1124 18:00:26.151099 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1624fb3c_139b_48e7_9b52_36f82ffacfa6.slice/crio-95c41ea0f629ae46f439ac254bafdd287c275aad319a79a94c4ab9db8f86c3d8 WatchSource:0}: Error finding container 95c41ea0f629ae46f439ac254bafdd287c275aad319a79a94c4ab9db8f86c3d8: Status 404 returned error can't find the container with id 95c41ea0f629ae46f439ac254bafdd287c275aad319a79a94c4ab9db8f86c3d8 Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.207698 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp8sc\" (UniqueName: \"kubernetes.io/projected/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-kube-api-access-sp8sc\") pod \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.207797 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-secret-volume\") pod \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.207870 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-config-volume\") pod \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\" (UID: \"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e\") " Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.209516 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-config-volume" (OuterVolumeSpecName: "config-volume") pod "6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" (UID: "6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.212165 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-kube-api-access-sp8sc" (OuterVolumeSpecName: "kube-api-access-sp8sc") pod "6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" (UID: "6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e"). InnerVolumeSpecName "kube-api-access-sp8sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.213599 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" (UID: "6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.309590 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp8sc\" (UniqueName: \"kubernetes.io/projected/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-kube-api-access-sp8sc\") on node \"crc\" DevicePath \"\"" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.309624 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.309636 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.861428 4768 generic.go:334] "Generic (PLEG): container finished" podID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerID="af7ff8bf37f5550a30459031a1be5739da70d9ed2cda86c96633dfc8ac20ce73" exitCode=0 Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.861673 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" event={"ID":"1624fb3c-139b-48e7-9b52-36f82ffacfa6","Type":"ContainerDied","Data":"af7ff8bf37f5550a30459031a1be5739da70d9ed2cda86c96633dfc8ac20ce73"} Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.863061 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" event={"ID":"1624fb3c-139b-48e7-9b52-36f82ffacfa6","Type":"ContainerStarted","Data":"95c41ea0f629ae46f439ac254bafdd287c275aad319a79a94c4ab9db8f86c3d8"} Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.867346 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" event={"ID":"6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e","Type":"ContainerDied","Data":"58eac312fcfe0d94d5563156a317ce879b3c7cf6313a46289bb26f920d10ace3"} Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.867504 4768 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="58eac312fcfe0d94d5563156a317ce879b3c7cf6313a46289bb26f920d10ace3" Nov 24 18:00:26 crc kubenswrapper[4768]: I1124 18:00:26.867691 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9" Nov 24 18:00:28 crc kubenswrapper[4768]: I1124 18:00:28.883936 4768 generic.go:334] "Generic (PLEG): container finished" podID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerID="810b09b347364fc28ff5a045eedbc775b3330d451d0412fe2bf2fbb294ef5238" exitCode=0 Nov 24 18:00:28 crc kubenswrapper[4768]: I1124 18:00:28.884037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" event={"ID":"1624fb3c-139b-48e7-9b52-36f82ffacfa6","Type":"ContainerDied","Data":"810b09b347364fc28ff5a045eedbc775b3330d451d0412fe2bf2fbb294ef5238"} Nov 24 18:00:29 crc kubenswrapper[4768]: I1124 18:00:29.894432 4768 generic.go:334] "Generic (PLEG): container finished" podID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerID="b8bba91495d34dd09319562b35ad460208fff032f2499c0fe3c8d866fd16c3ae" exitCode=0 Nov 24 18:00:29 crc kubenswrapper[4768]: I1124 18:00:29.894592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" event={"ID":"1624fb3c-139b-48e7-9b52-36f82ffacfa6","Type":"ContainerDied","Data":"b8bba91495d34dd09319562b35ad460208fff032f2499c0fe3c8d866fd16c3ae"} Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.193602 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.380440 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-util\") pod \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.380648 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-bundle\") pod \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.380690 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cwsh\" (UniqueName: \"kubernetes.io/projected/1624fb3c-139b-48e7-9b52-36f82ffacfa6-kube-api-access-8cwsh\") pod \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\" (UID: \"1624fb3c-139b-48e7-9b52-36f82ffacfa6\") " Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.381341 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-bundle" (OuterVolumeSpecName: "bundle") pod "1624fb3c-139b-48e7-9b52-36f82ffacfa6" (UID: "1624fb3c-139b-48e7-9b52-36f82ffacfa6"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.387100 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1624fb3c-139b-48e7-9b52-36f82ffacfa6-kube-api-access-8cwsh" (OuterVolumeSpecName: "kube-api-access-8cwsh") pod "1624fb3c-139b-48e7-9b52-36f82ffacfa6" (UID: "1624fb3c-139b-48e7-9b52-36f82ffacfa6"). InnerVolumeSpecName "kube-api-access-8cwsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.400869 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-util" (OuterVolumeSpecName: "util") pod "1624fb3c-139b-48e7-9b52-36f82ffacfa6" (UID: "1624fb3c-139b-48e7-9b52-36f82ffacfa6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.482164 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-util\") on node \"crc\" DevicePath \"\"" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.482209 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cwsh\" (UniqueName: \"kubernetes.io/projected/1624fb3c-139b-48e7-9b52-36f82ffacfa6-kube-api-access-8cwsh\") on node \"crc\" DevicePath \"\"" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.482223 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1624fb3c-139b-48e7-9b52-36f82ffacfa6-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.916771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" event={"ID":"1624fb3c-139b-48e7-9b52-36f82ffacfa6","Type":"ContainerDied","Data":"95c41ea0f629ae46f439ac254bafdd287c275aad319a79a94c4ab9db8f86c3d8"} Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.916835 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c41ea0f629ae46f439ac254bafdd287c275aad319a79a94c4ab9db8f86c3d8" Nov 24 18:00:31 crc kubenswrapper[4768]: I1124 18:00:31.916881 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.357643 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-65z5p"] Nov 24 18:00:34 crc kubenswrapper[4768]: E1124 18:00:34.357909 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" containerName="collect-profiles" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.357931 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" containerName="collect-profiles" Nov 24 18:00:34 crc kubenswrapper[4768]: E1124 18:00:34.357960 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="util" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.357972 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="util" Nov 24 18:00:34 crc kubenswrapper[4768]: E1124 18:00:34.357992 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="extract" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.358003 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="extract" Nov 24 18:00:34 crc kubenswrapper[4768]: E1124 18:00:34.358024 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="pull" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.358035 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="pull" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.358153 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" containerName="collect-profiles" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.358167 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1624fb3c-139b-48e7-9b52-36f82ffacfa6" containerName="extract" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.358661 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.360680 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.364144 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.366199 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-24ndk" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.370696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-65z5p"] Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.522973 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2zh\" (UniqueName: \"kubernetes.io/projected/2de3be4f-3f3a-4789-ad93-341bc12f368e-kube-api-access-zg2zh\") pod \"nmstate-operator-557fdffb88-65z5p\" (UID: \"2de3be4f-3f3a-4789-ad93-341bc12f368e\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.624181 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2zh\" (UniqueName: \"kubernetes.io/projected/2de3be4f-3f3a-4789-ad93-341bc12f368e-kube-api-access-zg2zh\") pod \"nmstate-operator-557fdffb88-65z5p\" (UID: \"2de3be4f-3f3a-4789-ad93-341bc12f368e\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.656371 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2zh\" (UniqueName: \"kubernetes.io/projected/2de3be4f-3f3a-4789-ad93-341bc12f368e-kube-api-access-zg2zh\") pod \"nmstate-operator-557fdffb88-65z5p\" (UID: \"2de3be4f-3f3a-4789-ad93-341bc12f368e\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.672940 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.850783 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-65z5p"] Nov 24 18:00:34 crc kubenswrapper[4768]: I1124 18:00:34.932415 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" event={"ID":"2de3be4f-3f3a-4789-ad93-341bc12f368e","Type":"ContainerStarted","Data":"ce732cc379cbdef239d39a944027c48fd455c15266a79c52bbf0ff64753cdb1f"} Nov 24 18:00:36 crc kubenswrapper[4768]: I1124 18:00:36.946877 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" event={"ID":"2de3be4f-3f3a-4789-ad93-341bc12f368e","Type":"ContainerStarted","Data":"0403ee5a527cafe2da010d588cb600726b80d7b840bb88ee142e72f97fb24de7"} Nov 24 18:00:36 crc kubenswrapper[4768]: I1124 18:00:36.962951 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-65z5p" podStartSLOduration=1.06813314 podStartE2EDuration="2.962928966s" podCreationTimestamp="2025-11-24 18:00:34 +0000 UTC" firstStartedPulling="2025-11-24 18:00:34.867248534 +0000 UTC m=+673.727830331" lastFinishedPulling="2025-11-24 18:00:36.76204438 +0000 UTC m=+675.622626157" observedRunningTime="2025-11-24 18:00:36.960081686 +0000 UTC m=+675.820663483" watchObservedRunningTime="2025-11-24 18:00:36.962928966 +0000 UTC m=+675.823510743" Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.863356 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm"] Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.864738 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.867551 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-k7fwm" Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.874830 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj"] Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.876201 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.878413 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.880299 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm"] Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.906411 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj"] Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.908469 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qxtkj"] Nov 24 18:00:37 crc kubenswrapper[4768]: I1124 18:00:37.909382 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.030545 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz"] Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.031287 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.032970 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.033451 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6rxcv" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.033823 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.050015 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz"] Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063239 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljh2\" (UniqueName: \"kubernetes.io/projected/822888f3-7b2d-48e4-a58e-42885dd6edf0-kube-api-access-8ljh2\") pod \"nmstate-metrics-5dcf9c57c5-676sm\" (UID: \"822888f3-7b2d-48e4-a58e-42885dd6edf0\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063344 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xzt\" (UniqueName: \"kubernetes.io/projected/70c8f860-b6e0-4407-bfd8-be567169db2c-kube-api-access-j6xzt\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-ovs-socket\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/204f91a8-34ab-4a27-96eb-1602cb1f1ed8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-fdltj\" (UID: \"204f91a8-34ab-4a27-96eb-1602cb1f1ed8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063455 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8szwd\" (UniqueName: \"kubernetes.io/projected/204f91a8-34ab-4a27-96eb-1602cb1f1ed8-kube-api-access-8szwd\") pod \"nmstate-webhook-6b89b748d8-fdltj\" (UID: \"204f91a8-34ab-4a27-96eb-1602cb1f1ed8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-dbus-socket\") pod \"nmstate-handler-qxtkj\" 
(UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.063507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-nmstate-lock\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2g6\" (UniqueName: \"kubernetes.io/projected/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-kube-api-access-bq2g6\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljh2\" (UniqueName: \"kubernetes.io/projected/822888f3-7b2d-48e4-a58e-42885dd6edf0-kube-api-access-8ljh2\") pod \"nmstate-metrics-5dcf9c57c5-676sm\" (UID: \"822888f3-7b2d-48e4-a58e-42885dd6edf0\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165191 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165226 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165256 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xzt\" (UniqueName: \"kubernetes.io/projected/70c8f860-b6e0-4407-bfd8-be567169db2c-kube-api-access-j6xzt\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-ovs-socket\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-ovs-socket\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/204f91a8-34ab-4a27-96eb-1602cb1f1ed8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-fdltj\" (UID: \"204f91a8-34ab-4a27-96eb-1602cb1f1ed8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8szwd\" (UniqueName: \"kubernetes.io/projected/204f91a8-34ab-4a27-96eb-1602cb1f1ed8-kube-api-access-8szwd\") pod \"nmstate-webhook-6b89b748d8-fdltj\" (UID: \"204f91a8-34ab-4a27-96eb-1602cb1f1ed8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-dbus-socket\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-nmstate-lock\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-nmstate-lock\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.165972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/70c8f860-b6e0-4407-bfd8-be567169db2c-dbus-socket\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.182312 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/204f91a8-34ab-4a27-96eb-1602cb1f1ed8-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-fdltj\" (UID: \"204f91a8-34ab-4a27-96eb-1602cb1f1ed8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.188216 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8szwd\" (UniqueName: \"kubernetes.io/projected/204f91a8-34ab-4a27-96eb-1602cb1f1ed8-kube-api-access-8szwd\") pod \"nmstate-webhook-6b89b748d8-fdltj\" (UID: \"204f91a8-34ab-4a27-96eb-1602cb1f1ed8\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.188365 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljh2\" (UniqueName: \"kubernetes.io/projected/822888f3-7b2d-48e4-a58e-42885dd6edf0-kube-api-access-8ljh2\") pod \"nmstate-metrics-5dcf9c57c5-676sm\" (UID: \"822888f3-7b2d-48e4-a58e-42885dd6edf0\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.190598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xzt\" (UniqueName: 
\"kubernetes.io/projected/70c8f860-b6e0-4407-bfd8-be567169db2c-kube-api-access-j6xzt\") pod \"nmstate-handler-qxtkj\" (UID: \"70c8f860-b6e0-4407-bfd8-be567169db2c\") " pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.193576 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.236937 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f98797f4b-wxwtc"] Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.238082 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.250711 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.251859 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f98797f4b-wxwtc"] Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.277282 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq2g6\" (UniqueName: \"kubernetes.io/projected/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-kube-api-access-bq2g6\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.277361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.277390 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.278995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.284244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: W1124 18:00:38.285576 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c8f860_b6e0_4407_bfd8_be567169db2c.slice/crio-ccdce4a522d0c591684bd449dc5716c9377e6f40eff8e259e8422416fa1937d0 WatchSource:0}: Error finding 
container ccdce4a522d0c591684bd449dc5716c9377e6f40eff8e259e8422416fa1937d0: Status 404 returned error can't find the container with id ccdce4a522d0c591684bd449dc5716c9377e6f40eff8e259e8422416fa1937d0 Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.300952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq2g6\" (UniqueName: \"kubernetes.io/projected/07b3a9eb-7a3b-4f8c-b205-0becb2a0168b-kube-api-access-bq2g6\") pod \"nmstate-console-plugin-5874bd7bc5-hnwzz\" (UID: \"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.351775 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.378814 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-oauth-config\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.378911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-service-ca\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.378936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-trusted-ca-bundle\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.378962 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-oauth-serving-cert\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.379002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-serving-cert\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.379184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l9xd\" (UniqueName: \"kubernetes.io/projected/7dfe0b7d-813e-4ae2-b042-4464e47835ea-kube-api-access-4l9xd\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.379217 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-config\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.423305 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj"] Nov 24 18:00:38 crc kubenswrapper[4768]: W1124 18:00:38.428641 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod204f91a8_34ab_4a27_96eb_1602cb1f1ed8.slice/crio-342456f2816db373c223229bf558eb57517340c596bc8d96511dae6e60610f56 WatchSource:0}: Error finding container 342456f2816db373c223229bf558eb57517340c596bc8d96511dae6e60610f56: Status 404 returned error can't find the container with id 342456f2816db373c223229bf558eb57517340c596bc8d96511dae6e60610f56 Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-service-ca\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-trusted-ca-bundle\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480547 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-oauth-serving-cert\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480585 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-serving-cert\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l9xd\" (UniqueName: \"kubernetes.io/projected/7dfe0b7d-813e-4ae2-b042-4464e47835ea-kube-api-access-4l9xd\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480643 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-config\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-oauth-config\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.481372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-service-ca\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.480334 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.482558 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-config\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.483368 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-trusted-ca-bundle\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.484670 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-oauth-config\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.485123 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dfe0b7d-813e-4ae2-b042-4464e47835ea-oauth-serving-cert\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.486683 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfe0b7d-813e-4ae2-b042-4464e47835ea-console-serving-cert\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.498913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l9xd\" (UniqueName: \"kubernetes.io/projected/7dfe0b7d-813e-4ae2-b042-4464e47835ea-kube-api-access-4l9xd\") pod \"console-6f98797f4b-wxwtc\" (UID: \"7dfe0b7d-813e-4ae2-b042-4464e47835ea\") " pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.539706 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz"] Nov 24 18:00:38 crc kubenswrapper[4768]: W1124 18:00:38.553671 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07b3a9eb_7a3b_4f8c_b205_0becb2a0168b.slice/crio-abd105bb3cb647f17cf682cb8cd14e97a451b83077e2c2fa438e4add1e0c2328 WatchSource:0}: Error finding container abd105bb3cb647f17cf682cb8cd14e97a451b83077e2c2fa438e4add1e0c2328: Status 404 returned error can't find the container with id abd105bb3cb647f17cf682cb8cd14e97a451b83077e2c2fa438e4add1e0c2328 Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.564803 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.678761 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm"] Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.765018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f98797f4b-wxwtc"] Nov 24 18:00:38 crc kubenswrapper[4768]: W1124 18:00:38.772253 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dfe0b7d_813e_4ae2_b042_4464e47835ea.slice/crio-6879dd01e8d8a1fd078731e348e27e1e9c7c77fa8d23f9d6730344081964ba9d WatchSource:0}: Error finding container 6879dd01e8d8a1fd078731e348e27e1e9c7c77fa8d23f9d6730344081964ba9d: Status 404 returned error can't find the container with id 6879dd01e8d8a1fd078731e348e27e1e9c7c77fa8d23f9d6730344081964ba9d Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.971045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qxtkj" event={"ID":"70c8f860-b6e0-4407-bfd8-be567169db2c","Type":"ContainerStarted","Data":"ccdce4a522d0c591684bd449dc5716c9377e6f40eff8e259e8422416fa1937d0"} Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.972196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" event={"ID":"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b","Type":"ContainerStarted","Data":"abd105bb3cb647f17cf682cb8cd14e97a451b83077e2c2fa438e4add1e0c2328"} Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.974592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f98797f4b-wxwtc" event={"ID":"7dfe0b7d-813e-4ae2-b042-4464e47835ea","Type":"ContainerStarted","Data":"1a7d14d7e73c2845a844106f3e54aa90a032e32d313b17a720d978941e3d3f34"} Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.974628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f98797f4b-wxwtc" event={"ID":"7dfe0b7d-813e-4ae2-b042-4464e47835ea","Type":"ContainerStarted","Data":"6879dd01e8d8a1fd078731e348e27e1e9c7c77fa8d23f9d6730344081964ba9d"} Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.976752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" event={"ID":"822888f3-7b2d-48e4-a58e-42885dd6edf0","Type":"ContainerStarted","Data":"e6afc450e93c63869d7ae49844fbf0dbedc593d51f20cb655cb597b7cd892ac0"} Nov 24 18:00:38 crc kubenswrapper[4768]: I1124 18:00:38.978581 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" event={"ID":"204f91a8-34ab-4a27-96eb-1602cb1f1ed8","Type":"ContainerStarted","Data":"342456f2816db373c223229bf558eb57517340c596bc8d96511dae6e60610f56"} Nov 24 18:00:39 crc kubenswrapper[4768]: I1124 18:00:38.999754 4768 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-console/console-6f98797f4b-wxwtc" podStartSLOduration=0.999735285 podStartE2EDuration="999.735285ms" podCreationTimestamp="2025-11-24 18:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:00:38.996655338 +0000 UTC m=+677.857237115" watchObservedRunningTime="2025-11-24 18:00:38.999735285 +0000 UTC m=+677.860317062" Nov 24 18:00:41 crc kubenswrapper[4768]: I1124 18:00:41.998397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" event={"ID":"822888f3-7b2d-48e4-a58e-42885dd6edf0","Type":"ContainerStarted","Data":"1df5b286c887b4820f212c0c088aeb23b34ee050193c93c9f81054797df97123"} Nov 24 18:00:42 crc kubenswrapper[4768]: I1124 18:00:42.002572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" event={"ID":"204f91a8-34ab-4a27-96eb-1602cb1f1ed8","Type":"ContainerStarted","Data":"adb577715562f755f8818885041c03cc9198238d46c078cd46d34acf325923e9"} Nov 24 18:00:42 crc kubenswrapper[4768]: I1124 18:00:42.002847 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:00:42 crc kubenswrapper[4768]: I1124 18:00:42.007347 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qxtkj" event={"ID":"70c8f860-b6e0-4407-bfd8-be567169db2c","Type":"ContainerStarted","Data":"8a23235ae54594454af7df43aa065cbf33f864de14f2c8191e997b84ff9f69aa"} Nov 24 18:00:42 crc kubenswrapper[4768]: I1124 18:00:42.010215 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" event={"ID":"07b3a9eb-7a3b-4f8c-b205-0becb2a0168b","Type":"ContainerStarted","Data":"5decba96a0d3aed2c3c0c3d47b8c0ae9bf8f0205f7802ab144d97fd10224cb3a"} Nov 24 18:00:42 crc kubenswrapper[4768]: I1124 18:00:42.023731 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" podStartSLOduration=1.9110880300000002 podStartE2EDuration="5.023706625s" podCreationTimestamp="2025-11-24 18:00:37 +0000 UTC" firstStartedPulling="2025-11-24 18:00:38.440106842 +0000 UTC m=+677.300688619" lastFinishedPulling="2025-11-24 18:00:41.552725437 +0000 UTC m=+680.413307214" observedRunningTime="2025-11-24 18:00:42.02070076 +0000 UTC m=+680.881282557" watchObservedRunningTime="2025-11-24 18:00:42.023706625 +0000 UTC m=+680.884288402" Nov 24 18:00:42 crc kubenswrapper[4768]: I1124 18:00:42.046793 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qxtkj" podStartSLOduration=1.782703632 podStartE2EDuration="5.046770507s" podCreationTimestamp="2025-11-24 18:00:37 +0000 UTC" firstStartedPulling="2025-11-24 18:00:38.289120996 +0000 UTC m=+677.149702773" lastFinishedPulling="2025-11-24 18:00:41.553187871 +0000 UTC m=+680.413769648" observedRunningTime="2025-11-24 18:00:42.041698493 +0000 UTC m=+680.902280280" watchObservedRunningTime="2025-11-24 18:00:42.046770507 +0000 UTC m=+680.907352294" Nov 24 18:00:43 crc kubenswrapper[4768]: I1124 18:00:43.014041 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:44 crc kubenswrapper[4768]: I1124 18:00:44.020253 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" 
event={"ID":"822888f3-7b2d-48e4-a58e-42885dd6edf0","Type":"ContainerStarted","Data":"6e6c755c9498979a9bb84ad140b819f3469cd0e8ceaf72635a1c40791334fccf"} Nov 24 18:00:44 crc kubenswrapper[4768]: I1124 18:00:44.038178 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-hnwzz" podStartSLOduration=3.042158192 podStartE2EDuration="6.038158112s" podCreationTimestamp="2025-11-24 18:00:38 +0000 UTC" firstStartedPulling="2025-11-24 18:00:38.556835701 +0000 UTC m=+677.417417468" lastFinishedPulling="2025-11-24 18:00:41.552835611 +0000 UTC m=+680.413417388" observedRunningTime="2025-11-24 18:00:42.067636256 +0000 UTC m=+680.928218033" watchObservedRunningTime="2025-11-24 18:00:44.038158112 +0000 UTC m=+682.898739889" Nov 24 18:00:44 crc kubenswrapper[4768]: I1124 18:00:44.038787 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-676sm" podStartSLOduration=2.281620769 podStartE2EDuration="7.038780219s" podCreationTimestamp="2025-11-24 18:00:37 +0000 UTC" firstStartedPulling="2025-11-24 18:00:38.709506795 +0000 UTC m=+677.570088572" lastFinishedPulling="2025-11-24 18:00:43.466666245 +0000 UTC m=+682.327248022" observedRunningTime="2025-11-24 18:00:44.034948871 +0000 UTC m=+682.895530648" watchObservedRunningTime="2025-11-24 18:00:44.038780219 +0000 UTC m=+682.899361996" Nov 24 18:00:48 crc kubenswrapper[4768]: I1124 18:00:48.284815 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qxtkj" Nov 24 18:00:48 crc kubenswrapper[4768]: I1124 18:00:48.565860 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:48 crc kubenswrapper[4768]: I1124 18:00:48.565932 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:48 crc kubenswrapper[4768]: I1124 18:00:48.571449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:49 crc kubenswrapper[4768]: I1124 18:00:49.055267 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f98797f4b-wxwtc" Nov 24 18:00:49 crc kubenswrapper[4768]: I1124 18:00:49.118354 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tj982"] Nov 24 18:00:58 crc kubenswrapper[4768]: I1124 18:00:58.201111 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-fdltj" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.008048 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588"] Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.011254 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.013062 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.020627 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588"] Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.036498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxll\" (UniqueName: \"kubernetes.io/projected/57e06364-1ec6-4ed6-b123-c52044bd3adb-kube-api-access-ppxll\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.036768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.036910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.138250 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppxll\" (UniqueName: \"kubernetes.io/projected/57e06364-1ec6-4ed6-b123-c52044bd3adb-kube-api-access-ppxll\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.138332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.138369 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.138794 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.139225 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.158922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppxll\" (UniqueName: \"kubernetes.io/projected/57e06364-1ec6-4ed6-b123-c52044bd3adb-kube-api-access-ppxll\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.331719 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:11 crc kubenswrapper[4768]: I1124 18:01:11.779615 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588"] Nov 24 18:01:12 crc kubenswrapper[4768]: I1124 18:01:12.198089 4768 generic.go:334] "Generic (PLEG): container finished" podID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerID="a813b8653797eef231b1943adee3fb221947ce1251930046d60a36d37db3ced0" exitCode=0 Nov 24 18:01:12 crc kubenswrapper[4768]: I1124 18:01:12.198391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" event={"ID":"57e06364-1ec6-4ed6-b123-c52044bd3adb","Type":"ContainerDied","Data":"a813b8653797eef231b1943adee3fb221947ce1251930046d60a36d37db3ced0"} Nov 24 18:01:12 crc kubenswrapper[4768]: I1124 18:01:12.198690 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" event={"ID":"57e06364-1ec6-4ed6-b123-c52044bd3adb","Type":"ContainerStarted","Data":"f2ceb6223a436484adb7755fe80e99b501f37cfc95e27a6ee4680495a706238c"} Nov 24 18:01:13 crc kubenswrapper[4768]: I1124 18:01:13.657225 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:01:13 crc kubenswrapper[4768]: I1124 18:01:13.657818 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.181791 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-f9d7485db-tj982" podUID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" containerName="console" containerID="cri-o://6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179" gracePeriod=15 Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.212678 4768 generic.go:334] "Generic (PLEG): container finished" podID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerID="348c80ef1d7cde85be6372332a6ec3f1e064edc4f4c398416492497dda70a66e" exitCode=0 Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.212733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" event={"ID":"57e06364-1ec6-4ed6-b123-c52044bd3adb","Type":"ContainerDied","Data":"348c80ef1d7cde85be6372332a6ec3f1e064edc4f4c398416492497dda70a66e"} Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.570123 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tj982_920a0317-09dd-43e5-b5a9-11feb6d3b37d/console/0.log" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.570197 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tj982" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.587780 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-oauth-serving-cert\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.587825 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-trusted-ca-bundle\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.587901 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s674\" (UniqueName: \"kubernetes.io/projected/920a0317-09dd-43e5-b5a9-11feb6d3b37d-kube-api-access-6s674\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.587942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-service-ca\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.587983 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-serving-cert\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.588058 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-config\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.588098 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-oauth-config\") pod \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\" (UID: \"920a0317-09dd-43e5-b5a9-11feb6d3b37d\") " Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.588804 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.588825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.588813 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-service-ca" (OuterVolumeSpecName: "service-ca") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.588837 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-config" (OuterVolumeSpecName: "console-config") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.594735 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920a0317-09dd-43e5-b5a9-11feb6d3b37d-kube-api-access-6s674" (OuterVolumeSpecName: "kube-api-access-6s674") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "kube-api-access-6s674". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.595018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.595052 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "920a0317-09dd-43e5-b5a9-11feb6d3b37d" (UID: "920a0317-09dd-43e5-b5a9-11feb6d3b37d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690168 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690231 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690254 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690272 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690290 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s674\" (UniqueName: \"kubernetes.io/projected/920a0317-09dd-43e5-b5a9-11feb6d3b37d-kube-api-access-6s674\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690311 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/920a0317-09dd-43e5-b5a9-11feb6d3b37d-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:14 crc kubenswrapper[4768]: I1124 18:01:14.690328 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/920a0317-09dd-43e5-b5a9-11feb6d3b37d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.222443 4768 generic.go:334] "Generic (PLEG): container finished" podID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerID="55108b70d0f5ffb12abd49f409921ff9ba5591e49199560e1130aa6bf9067b8e" exitCode=0 Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.222543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" event={"ID":"57e06364-1ec6-4ed6-b123-c52044bd3adb","Type":"ContainerDied","Data":"55108b70d0f5ffb12abd49f409921ff9ba5591e49199560e1130aa6bf9067b8e"} Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.226001 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tj982_920a0317-09dd-43e5-b5a9-11feb6d3b37d/console/0.log" Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.226087 4768 generic.go:334] "Generic (PLEG): container finished" podID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" containerID="6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179" exitCode=2 Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.226141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tj982" event={"ID":"920a0317-09dd-43e5-b5a9-11feb6d3b37d","Type":"ContainerDied","Data":"6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179"} Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.226195 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tj982" 
event={"ID":"920a0317-09dd-43e5-b5a9-11feb6d3b37d","Type":"ContainerDied","Data":"89b6eea92acbe274aa5ec5dd37fbc85a0397147f838d568dfe02011bbcbfcf06"} Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.226225 4768 scope.go:117] "RemoveContainer" containerID="6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179" Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.226262 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tj982" Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.255925 4768 scope.go:117] "RemoveContainer" containerID="6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179" Nov 24 18:01:15 crc kubenswrapper[4768]: E1124 18:01:15.256900 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179\": container with ID starting with 6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179 not found: ID does not exist" containerID="6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179" Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.256954 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179"} err="failed to get container status \"6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179\": rpc error: code = NotFound desc = could not find container \"6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179\": container with ID starting with 6adf2ac8b5a437c712c737c165f4b78390e1d43a7f613050e92713a7e3a00179 not found: ID does not exist" Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.274925 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tj982"] Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.279281 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tj982"] Nov 24 18:01:15 crc kubenswrapper[4768]: I1124 18:01:15.907224 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" path="/var/lib/kubelet/pods/920a0317-09dd-43e5-b5a9-11feb6d3b37d/volumes" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.455272 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.517031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppxll\" (UniqueName: \"kubernetes.io/projected/57e06364-1ec6-4ed6-b123-c52044bd3adb-kube-api-access-ppxll\") pod \"57e06364-1ec6-4ed6-b123-c52044bd3adb\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.517159 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-bundle\") pod \"57e06364-1ec6-4ed6-b123-c52044bd3adb\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.517206 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-util\") pod \"57e06364-1ec6-4ed6-b123-c52044bd3adb\" (UID: \"57e06364-1ec6-4ed6-b123-c52044bd3adb\") " Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.518909 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-bundle" (OuterVolumeSpecName: "bundle") pod "57e06364-1ec6-4ed6-b123-c52044bd3adb" (UID: "57e06364-1ec6-4ed6-b123-c52044bd3adb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.521133 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e06364-1ec6-4ed6-b123-c52044bd3adb-kube-api-access-ppxll" (OuterVolumeSpecName: "kube-api-access-ppxll") pod "57e06364-1ec6-4ed6-b123-c52044bd3adb" (UID: "57e06364-1ec6-4ed6-b123-c52044bd3adb"). InnerVolumeSpecName "kube-api-access-ppxll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.530260 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-util" (OuterVolumeSpecName: "util") pod "57e06364-1ec6-4ed6-b123-c52044bd3adb" (UID: "57e06364-1ec6-4ed6-b123-c52044bd3adb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.619231 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppxll\" (UniqueName: \"kubernetes.io/projected/57e06364-1ec6-4ed6-b123-c52044bd3adb-kube-api-access-ppxll\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.619266 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:16 crc kubenswrapper[4768]: I1124 18:01:16.619275 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e06364-1ec6-4ed6-b123-c52044bd3adb-util\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:17 crc kubenswrapper[4768]: I1124 18:01:17.242104 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" event={"ID":"57e06364-1ec6-4ed6-b123-c52044bd3adb","Type":"ContainerDied","Data":"f2ceb6223a436484adb7755fe80e99b501f37cfc95e27a6ee4680495a706238c"} Nov 24 18:01:17 crc kubenswrapper[4768]: I1124 18:01:17.242144 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ceb6223a436484adb7755fe80e99b501f37cfc95e27a6ee4680495a706238c" Nov 24 18:01:17 crc kubenswrapper[4768]: I1124 18:01:17.242175 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.546094 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q"] Nov 24 18:01:25 crc kubenswrapper[4768]: E1124 18:01:25.546914 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="pull" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.546930 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="pull" Nov 24 18:01:25 crc kubenswrapper[4768]: E1124 18:01:25.546944 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="extract" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.546951 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="extract" Nov 24 18:01:25 crc kubenswrapper[4768]: E1124 18:01:25.546962 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="util" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.546970 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="util" Nov 24 18:01:25 crc kubenswrapper[4768]: E1124 18:01:25.546977 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" containerName="console" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.546984 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" containerName="console" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.547114 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e06364-1ec6-4ed6-b123-c52044bd3adb" containerName="extract" Nov 
24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.547125 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="920a0317-09dd-43e5-b5a9-11feb6d3b37d" containerName="console" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.547586 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.549273 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.550002 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.551227 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.551338 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gzxpm" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.551978 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.565944 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q"] Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.632915 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59812c96-7130-431b-8e63-08a04a76a481-webhook-cert\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.633026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwms\" (UniqueName: \"kubernetes.io/projected/59812c96-7130-431b-8e63-08a04a76a481-kube-api-access-kmwms\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.633067 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59812c96-7130-431b-8e63-08a04a76a481-apiservice-cert\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.733755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59812c96-7130-431b-8e63-08a04a76a481-webhook-cert\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.733858 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwms\" (UniqueName: 
\"kubernetes.io/projected/59812c96-7130-431b-8e63-08a04a76a481-kube-api-access-kmwms\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.733887 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59812c96-7130-431b-8e63-08a04a76a481-apiservice-cert\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.742228 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59812c96-7130-431b-8e63-08a04a76a481-apiservice-cert\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.754148 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwms\" (UniqueName: \"kubernetes.io/projected/59812c96-7130-431b-8e63-08a04a76a481-kube-api-access-kmwms\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.756245 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59812c96-7130-431b-8e63-08a04a76a481-webhook-cert\") pod \"metallb-operator-controller-manager-65d776c5c5-mm52q\" (UID: \"59812c96-7130-431b-8e63-08a04a76a481\") " pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.844897 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf"] Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.845588 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.853871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.854078 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dv7pp" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.854620 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.863158 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf"] Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.863366 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.935952 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60867050-3f57-4b08-ace3-524c54adfeff-webhook-cert\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.936041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tth7\" (UniqueName: \"kubernetes.io/projected/60867050-3f57-4b08-ace3-524c54adfeff-kube-api-access-7tth7\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:25 crc kubenswrapper[4768]: I1124 18:01:25.936827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60867050-3f57-4b08-ace3-524c54adfeff-apiservice-cert\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.038258 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60867050-3f57-4b08-ace3-524c54adfeff-webhook-cert\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.038338 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tth7\" (UniqueName: \"kubernetes.io/projected/60867050-3f57-4b08-ace3-524c54adfeff-kube-api-access-7tth7\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.038379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60867050-3f57-4b08-ace3-524c54adfeff-apiservice-cert\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.042566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60867050-3f57-4b08-ace3-524c54adfeff-apiservice-cert\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.061989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60867050-3f57-4b08-ace3-524c54adfeff-webhook-cert\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " 
pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.068362 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tth7\" (UniqueName: \"kubernetes.io/projected/60867050-3f57-4b08-ace3-524c54adfeff-kube-api-access-7tth7\") pod \"metallb-operator-webhook-server-ddc448d79-8bqsf\" (UID: \"60867050-3f57-4b08-ace3-524c54adfeff\") " pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.163094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.247968 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q"] Nov 24 18:01:26 crc kubenswrapper[4768]: W1124 18:01:26.263302 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59812c96_7130_431b_8e63_08a04a76a481.slice/crio-2fa7ebf9b888bd5472c5acd09b4eb91909781fb877e9c179990068829a74f235 WatchSource:0}: Error finding container 2fa7ebf9b888bd5472c5acd09b4eb91909781fb877e9c179990068829a74f235: Status 404 returned error can't find the container with id 2fa7ebf9b888bd5472c5acd09b4eb91909781fb877e9c179990068829a74f235 Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.302987 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" event={"ID":"59812c96-7130-431b-8e63-08a04a76a481","Type":"ContainerStarted","Data":"2fa7ebf9b888bd5472c5acd09b4eb91909781fb877e9c179990068829a74f235"} Nov 24 18:01:26 crc kubenswrapper[4768]: I1124 18:01:26.592632 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf"] Nov 24 18:01:26 crc kubenswrapper[4768]: W1124 18:01:26.600635 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60867050_3f57_4b08_ace3_524c54adfeff.slice/crio-94e8d7d580c54c5e2a4fb68ca58f97f424f8db0fa0c31abd5bc93b4d0a405a34 WatchSource:0}: Error finding container 94e8d7d580c54c5e2a4fb68ca58f97f424f8db0fa0c31abd5bc93b4d0a405a34: Status 404 returned error can't find the container with id 94e8d7d580c54c5e2a4fb68ca58f97f424f8db0fa0c31abd5bc93b4d0a405a34 Nov 24 18:01:27 crc kubenswrapper[4768]: I1124 18:01:27.309554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" event={"ID":"60867050-3f57-4b08-ace3-524c54adfeff","Type":"ContainerStarted","Data":"94e8d7d580c54c5e2a4fb68ca58f97f424f8db0fa0c31abd5bc93b4d0a405a34"} Nov 24 18:01:30 crc kubenswrapper[4768]: I1124 18:01:30.327426 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" event={"ID":"59812c96-7130-431b-8e63-08a04a76a481","Type":"ContainerStarted","Data":"dfa450ff1832c853195baef01c365f295992333722e541a4850bf7f0a32b56ad"} Nov 24 18:01:30 crc kubenswrapper[4768]: I1124 18:01:30.327958 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:01:30 crc kubenswrapper[4768]: I1124 18:01:30.354084 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" podStartSLOduration=2.271411803 podStartE2EDuration="5.354058462s" podCreationTimestamp="2025-11-24 18:01:25 +0000 UTC" firstStartedPulling="2025-11-24 18:01:26.266944235 +0000 UTC m=+725.127526012" lastFinishedPulling="2025-11-24 18:01:29.349590894 +0000 UTC m=+728.210172671" observedRunningTime="2025-11-24 18:01:30.347804259 +0000 UTC m=+729.208386046" watchObservedRunningTime="2025-11-24 18:01:30.354058462 +0000 UTC m=+729.214640239" Nov 24 18:01:31 crc kubenswrapper[4768]: I1124 18:01:31.340183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" event={"ID":"60867050-3f57-4b08-ace3-524c54adfeff","Type":"ContainerStarted","Data":"78499e7f787561ec2b179b408fc104275f619c3f3f307a534c7e339b5a118917"} Nov 24 18:01:31 crc kubenswrapper[4768]: I1124 18:01:31.340255 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:43 crc kubenswrapper[4768]: I1124 18:01:43.656693 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:01:43 crc kubenswrapper[4768]: I1124 18:01:43.657355 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:01:46 crc kubenswrapper[4768]: I1124 18:01:46.169061 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" Nov 24 18:01:46 crc kubenswrapper[4768]: I1124 18:01:46.190012 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-ddc448d79-8bqsf" podStartSLOduration=16.645040442 podStartE2EDuration="21.189998106s" podCreationTimestamp="2025-11-24 18:01:25 +0000 UTC" firstStartedPulling="2025-11-24 18:01:26.603234824 +0000 UTC m=+725.463816601" lastFinishedPulling="2025-11-24 18:01:31.148192488 +0000 UTC m=+730.008774265" observedRunningTime="2025-11-24 18:01:31.377784337 +0000 UTC m=+730.238366124" watchObservedRunningTime="2025-11-24 18:01:46.189998106 +0000 UTC m=+745.050579883" Nov 24 18:01:48 crc kubenswrapper[4768]: I1124 18:01:48.935246 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwfrc"] Nov 24 18:01:48 crc kubenswrapper[4768]: I1124 18:01:48.936777 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" podUID="df08e410-ea02-4bf7-8330-d0530b2c08b5" containerName="controller-manager" containerID="cri-o://2a2f9b5d85566ca4d5574acebc98a68303bcdc7365c246cfe83463a383e3481a" gracePeriod=30 Nov 24 18:01:48 crc kubenswrapper[4768]: I1124 18:01:48.986409 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"] Nov 24 18:01:48 crc kubenswrapper[4768]: I1124 18:01:48.986624 4768 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" podUID="097861b9-f639-4e44-a54e-ae798f106ef0" containerName="route-controller-manager" containerID="cri-o://404f19099be914d24160ae3d8b19db043425475ed03848d06c7b7e2ac7af5077" gracePeriod=30 Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.424745 4768 generic.go:334] "Generic (PLEG): container finished" podID="df08e410-ea02-4bf7-8330-d0530b2c08b5" containerID="2a2f9b5d85566ca4d5574acebc98a68303bcdc7365c246cfe83463a383e3481a" exitCode=0 Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.424814 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" event={"ID":"df08e410-ea02-4bf7-8330-d0530b2c08b5","Type":"ContainerDied","Data":"2a2f9b5d85566ca4d5574acebc98a68303bcdc7365c246cfe83463a383e3481a"} Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.427109 4768 generic.go:334] "Generic (PLEG): container finished" podID="097861b9-f639-4e44-a54e-ae798f106ef0" containerID="404f19099be914d24160ae3d8b19db043425475ed03848d06c7b7e2ac7af5077" exitCode=0 Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.427203 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" event={"ID":"097861b9-f639-4e44-a54e-ae798f106ef0","Type":"ContainerDied","Data":"404f19099be914d24160ae3d8b19db043425475ed03848d06c7b7e2ac7af5077"} Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.786442 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.858547 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.878823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df08e410-ea02-4bf7-8330-d0530b2c08b5-serving-cert\") pod \"df08e410-ea02-4bf7-8330-d0530b2c08b5\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.878874 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-proxy-ca-bundles\") pod \"df08e410-ea02-4bf7-8330-d0530b2c08b5\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.878909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-config\") pod \"df08e410-ea02-4bf7-8330-d0530b2c08b5\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.878954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sncjx\" (UniqueName: \"kubernetes.io/projected/df08e410-ea02-4bf7-8330-d0530b2c08b5-kube-api-access-sncjx\") pod \"df08e410-ea02-4bf7-8330-d0530b2c08b5\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.878973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-client-ca\") pod \"df08e410-ea02-4bf7-8330-d0530b2c08b5\" (UID: \"df08e410-ea02-4bf7-8330-d0530b2c08b5\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.879706 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "df08e410-ea02-4bf7-8330-d0530b2c08b5" (UID: "df08e410-ea02-4bf7-8330-d0530b2c08b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.879955 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df08e410-ea02-4bf7-8330-d0530b2c08b5" (UID: "df08e410-ea02-4bf7-8330-d0530b2c08b5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.880457 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-config" (OuterVolumeSpecName: "config") pod "df08e410-ea02-4bf7-8330-d0530b2c08b5" (UID: "df08e410-ea02-4bf7-8330-d0530b2c08b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.885668 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df08e410-ea02-4bf7-8330-d0530b2c08b5-kube-api-access-sncjx" (OuterVolumeSpecName: "kube-api-access-sncjx") pod "df08e410-ea02-4bf7-8330-d0530b2c08b5" (UID: "df08e410-ea02-4bf7-8330-d0530b2c08b5"). InnerVolumeSpecName "kube-api-access-sncjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.885910 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df08e410-ea02-4bf7-8330-d0530b2c08b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df08e410-ea02-4bf7-8330-d0530b2c08b5" (UID: "df08e410-ea02-4bf7-8330-d0530b2c08b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.980841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-client-ca\") pod \"097861b9-f639-4e44-a54e-ae798f106ef0\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.981063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-config\") pod \"097861b9-f639-4e44-a54e-ae798f106ef0\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.981232 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fs2n\" (UniqueName: \"kubernetes.io/projected/097861b9-f639-4e44-a54e-ae798f106ef0-kube-api-access-8fs2n\") pod \"097861b9-f639-4e44-a54e-ae798f106ef0\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.981381 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097861b9-f639-4e44-a54e-ae798f106ef0-serving-cert\") pod \"097861b9-f639-4e44-a54e-ae798f106ef0\" (UID: \"097861b9-f639-4e44-a54e-ae798f106ef0\") "
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.981725 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-client-ca" (OuterVolumeSpecName: "client-ca") pod "097861b9-f639-4e44-a54e-ae798f106ef0" (UID: "097861b9-f639-4e44-a54e-ae798f106ef0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982146 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-config" (OuterVolumeSpecName: "config") pod "097861b9-f639-4e44-a54e-ae798f106ef0" (UID: "097861b9-f639-4e44-a54e-ae798f106ef0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982397 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sncjx\" (UniqueName: \"kubernetes.io/projected/df08e410-ea02-4bf7-8330-d0530b2c08b5-kube-api-access-sncjx\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982453 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-client-ca\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982479 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-client-ca\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982510 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/097861b9-f639-4e44-a54e-ae798f106ef0-config\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982522 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df08e410-ea02-4bf7-8330-d0530b2c08b5-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982534 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.982551 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df08e410-ea02-4bf7-8330-d0530b2c08b5-config\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.984406 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097861b9-f639-4e44-a54e-ae798f106ef0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "097861b9-f639-4e44-a54e-ae798f106ef0" (UID: "097861b9-f639-4e44-a54e-ae798f106ef0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:01:49 crc kubenswrapper[4768]: I1124 18:01:49.984563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097861b9-f639-4e44-a54e-ae798f106ef0-kube-api-access-8fs2n" (OuterVolumeSpecName: "kube-api-access-8fs2n") pod "097861b9-f639-4e44-a54e-ae798f106ef0" (UID: "097861b9-f639-4e44-a54e-ae798f106ef0"). InnerVolumeSpecName "kube-api-access-8fs2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.083285 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fs2n\" (UniqueName: \"kubernetes.io/projected/097861b9-f639-4e44-a54e-ae798f106ef0-kube-api-access-8fs2n\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.083315 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/097861b9-f639-4e44-a54e-ae798f106ef0-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.433776 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.433783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49" event={"ID":"097861b9-f639-4e44-a54e-ae798f106ef0","Type":"ContainerDied","Data":"b622ee901b2488c793480709d0545aeb3acc5fb5fe8f8574e752338d3a6c50e2"}
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.434191 4768 scope.go:117] "RemoveContainer" containerID="404f19099be914d24160ae3d8b19db043425475ed03848d06c7b7e2ac7af5077"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.435403 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc" event={"ID":"df08e410-ea02-4bf7-8330-d0530b2c08b5","Type":"ContainerDied","Data":"4619ce1363586919481cc3d54159b704e17286c31c7b2626e95b51ca9959a3fe"}
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.435473 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwfrc"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.454659 4768 scope.go:117] "RemoveContainer" containerID="2a2f9b5d85566ca4d5574acebc98a68303bcdc7365c246cfe83463a383e3481a"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.471062 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwfrc"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.474148 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwfrc"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.483727 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.487772 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4n49"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.673003 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"]
Nov 24 18:01:50 crc kubenswrapper[4768]: E1124 18:01:50.673834 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df08e410-ea02-4bf7-8330-d0530b2c08b5" containerName="controller-manager"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.674048 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="df08e410-ea02-4bf7-8330-d0530b2c08b5" containerName="controller-manager"
Nov 24 18:01:50 crc kubenswrapper[4768]: E1124 18:01:50.674204 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097861b9-f639-4e44-a54e-ae798f106ef0" containerName="route-controller-manager"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.674353 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="097861b9-f639-4e44-a54e-ae798f106ef0" containerName="route-controller-manager"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.674749 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="df08e410-ea02-4bf7-8330-d0530b2c08b5" containerName="controller-manager"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.674933 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="097861b9-f639-4e44-a54e-ae798f106ef0" containerName="route-controller-manager"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.675836 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.678189 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f78486575-4q2ft"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.678421 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.678901 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.679111 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.679304 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.679566 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.679964 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.680761 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.683771 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.683968 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.684034 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.684466 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.684739 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.684976 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.685256 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.690312 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.699248 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f78486575-4q2ft"]
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.792954 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-serving-cert\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793011 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-client-ca\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1249121e-b23a-4827-92d9-a45b452c8e08-serving-cert\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793236 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-config\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793307 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dtc\" (UniqueName: \"kubernetes.io/projected/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-kube-api-access-p5dtc\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793363 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-proxy-ca-bundles\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793532 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-config\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6fn\" (UniqueName: \"kubernetes.io/projected/1249121e-b23a-4827-92d9-a45b452c8e08-kube-api-access-sp6fn\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.793716 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-client-ca\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5dtc\" (UniqueName: \"kubernetes.io/projected/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-kube-api-access-p5dtc\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895519 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-proxy-ca-bundles\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895574 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-config\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6fn\" (UniqueName: \"kubernetes.io/projected/1249121e-b23a-4827-92d9-a45b452c8e08-kube-api-access-sp6fn\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-client-ca\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895661 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-serving-cert\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895694 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-client-ca\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895717 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1249121e-b23a-4827-92d9-a45b452c8e08-serving-cert\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.895758 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-config\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.896967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-proxy-ca-bundles\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.897639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-config\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.898420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-client-ca\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.898790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-config\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.899133 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-client-ca\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.905724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-serving-cert\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.906512 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1249121e-b23a-4827-92d9-a45b452c8e08-serving-cert\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft"
pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft" Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.912552 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f78486575-4q2ft"] Nov 24 18:01:50 crc kubenswrapper[4768]: E1124 18:01:50.912963 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-sp6fn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft" podUID="1249121e-b23a-4827-92d9-a45b452c8e08" Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.922598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5dtc\" (UniqueName: \"kubernetes.io/projected/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-kube-api-access-p5dtc\") pod \"route-controller-manager-69484f5475-lht2m\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") " pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.931301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6fn\" (UniqueName: \"kubernetes.io/projected/1249121e-b23a-4827-92d9-a45b452c8e08-kube-api-access-sp6fn\") pod \"controller-manager-7f78486575-4q2ft\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft" Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.936574 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"] Nov 24 18:01:50 crc kubenswrapper[4768]: I1124 18:01:50.937294 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.442549 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.453357 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f78486575-4q2ft" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.477609 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"] Nov 24 18:01:51 crc kubenswrapper[4768]: W1124 18:01:51.482277 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda30c3d5d_8212_43e0_b3eb_4f6df6e769a1.slice/crio-016284d21620d18687ea868f2e7ca3293e4776b3b2c309193215b4a3223454cd WatchSource:0}: Error finding container 016284d21620d18687ea868f2e7ca3293e4776b3b2c309193215b4a3223454cd: Status 404 returned error can't find the container with id 016284d21620d18687ea868f2e7ca3293e4776b3b2c309193215b4a3223454cd Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.503242 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-proxy-ca-bundles\") pod \"1249121e-b23a-4827-92d9-a45b452c8e08\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.503320 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp6fn\" (UniqueName: \"kubernetes.io/projected/1249121e-b23a-4827-92d9-a45b452c8e08-kube-api-access-sp6fn\") pod \"1249121e-b23a-4827-92d9-a45b452c8e08\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.503353 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1249121e-b23a-4827-92d9-a45b452c8e08-serving-cert\") pod \"1249121e-b23a-4827-92d9-a45b452c8e08\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.503420 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-client-ca\") pod \"1249121e-b23a-4827-92d9-a45b452c8e08\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.503520 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-config\") pod \"1249121e-b23a-4827-92d9-a45b452c8e08\" (UID: \"1249121e-b23a-4827-92d9-a45b452c8e08\") " Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.504025 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1249121e-b23a-4827-92d9-a45b452c8e08" (UID: "1249121e-b23a-4827-92d9-a45b452c8e08"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.504380 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-client-ca" (OuterVolumeSpecName: "client-ca") pod "1249121e-b23a-4827-92d9-a45b452c8e08" (UID: "1249121e-b23a-4827-92d9-a45b452c8e08"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.505264 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-config" (OuterVolumeSpecName: "config") pod "1249121e-b23a-4827-92d9-a45b452c8e08" (UID: "1249121e-b23a-4827-92d9-a45b452c8e08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.506872 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1249121e-b23a-4827-92d9-a45b452c8e08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1249121e-b23a-4827-92d9-a45b452c8e08" (UID: "1249121e-b23a-4827-92d9-a45b452c8e08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.508085 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1249121e-b23a-4827-92d9-a45b452c8e08-kube-api-access-sp6fn" (OuterVolumeSpecName: "kube-api-access-sp6fn") pod "1249121e-b23a-4827-92d9-a45b452c8e08" (UID: "1249121e-b23a-4827-92d9-a45b452c8e08"). InnerVolumeSpecName "kube-api-access-sp6fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.604888 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.604951 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp6fn\" (UniqueName: \"kubernetes.io/projected/1249121e-b23a-4827-92d9-a45b452c8e08-kube-api-access-sp6fn\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.604967 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1249121e-b23a-4827-92d9-a45b452c8e08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.604994 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.605015 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1249121e-b23a-4827-92d9-a45b452c8e08-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.905374 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097861b9-f639-4e44-a54e-ae798f106ef0" path="/var/lib/kubelet/pods/097861b9-f639-4e44-a54e-ae798f106ef0/volumes" Nov 24 18:01:51 crc kubenswrapper[4768]: I1124 18:01:51.906374 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df08e410-ea02-4bf7-8330-d0530b2c08b5" path="/var/lib/kubelet/pods/df08e410-ea02-4bf7-8330-d0530b2c08b5/volumes" Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.454118 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.454198 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" podUID="a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" containerName="route-controller-manager" containerID="cri-o://d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942" gracePeriod=30
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.454719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" event={"ID":"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1","Type":"ContainerStarted","Data":"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942"}
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.454748 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" event={"ID":"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1","Type":"ContainerStarted","Data":"016284d21620d18687ea868f2e7ca3293e4776b3b2c309193215b4a3223454cd"}
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.454763 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.460458 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.477642 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" podStartSLOduration=3.477621961 podStartE2EDuration="3.477621961s" podCreationTimestamp="2025-11-24 18:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:01:52.475901353 +0000 UTC m=+751.336483130" watchObservedRunningTime="2025-11-24 18:01:52.477621961 +0000 UTC m=+751.338203738"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.515551 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f78486575-4q2ft"]
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.522673 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"]
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.523470 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.527597 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f78486575-4q2ft"]
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.528627 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.528911 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.529048 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.529189 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.529318 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.529677 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.535115 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"]
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.536073 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.621377 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-client-ca\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.621432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqkj\" (UniqueName: \"kubernetes.io/projected/69302a1d-718a-4033-89ed-38df69ae7ec5-kube-api-access-2hqkj\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.624550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69302a1d-718a-4033-89ed-38df69ae7ec5-serving-cert\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.624669 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-config\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.624725 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-proxy-ca-bundles\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.726178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69302a1d-718a-4033-89ed-38df69ae7ec5-serving-cert\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.726246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-config\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.726277 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-proxy-ca-bundles\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.726308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-client-ca\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.726325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqkj\" (UniqueName: \"kubernetes.io/projected/69302a1d-718a-4033-89ed-38df69ae7ec5-kube-api-access-2hqkj\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.727553 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-client-ca\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.727671 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-proxy-ca-bundles\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.729645 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69302a1d-718a-4033-89ed-38df69ae7ec5-config\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.738622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69302a1d-718a-4033-89ed-38df69ae7ec5-serving-cert\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.743407 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqkj\" (UniqueName: \"kubernetes.io/projected/69302a1d-718a-4033-89ed-38df69ae7ec5-kube-api-access-2hqkj\") pod \"controller-manager-5dcd99bb4d-nlc7s\" (UID: \"69302a1d-718a-4033-89ed-38df69ae7ec5\") " pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.858316 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.885326 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.929014 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-client-ca\") pod \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") "
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.929111 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-serving-cert\") pod \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") "
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.929131 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-config\") pod \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") "
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.929301 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5dtc\" (UniqueName: \"kubernetes.io/projected/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-kube-api-access-p5dtc\") pod \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\" (UID: \"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1\") "
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.930033 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" (UID: "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.930178 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-config" (OuterVolumeSpecName: "config") pod "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" (UID: "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.932708 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-kube-api-access-p5dtc" (OuterVolumeSpecName: "kube-api-access-p5dtc") pod "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" (UID: "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1"). InnerVolumeSpecName "kube-api-access-p5dtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:01:52 crc kubenswrapper[4768]: I1124 18:01:52.933143 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" (UID: "a30c3d5d-8212-43e0-b3eb-4f6df6e769a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.031598 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5dtc\" (UniqueName: \"kubernetes.io/projected/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-kube-api-access-p5dtc\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.031633 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-client-ca\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.031642 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.031652 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1-config\") on node \"crc\" DevicePath \"\""
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.091572 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s"]
Nov 24 18:01:53 crc kubenswrapper[4768]: W1124 18:01:53.102812 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69302a1d_718a_4033_89ed_38df69ae7ec5.slice/crio-80149841bceccd52ef1bf71ec66a59d0e7a766f154d97a75a5bd10f7ac53e656 WatchSource:0}: Error finding container 80149841bceccd52ef1bf71ec66a59d0e7a766f154d97a75a5bd10f7ac53e656: Status 404 returned error can't find the container with id 80149841bceccd52ef1bf71ec66a59d0e7a766f154d97a75a5bd10f7ac53e656
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.461092 4768 generic.go:334] "Generic (PLEG): container finished" podID="a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" containerID="d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942" exitCode=0
Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.461132 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" event={"ID":"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1","Type":"ContainerDied","Data":"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942"}
pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" event={"ID":"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1","Type":"ContainerDied","Data":"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942"} Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.461408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" event={"ID":"a30c3d5d-8212-43e0-b3eb-4f6df6e769a1","Type":"ContainerDied","Data":"016284d21620d18687ea868f2e7ca3293e4776b3b2c309193215b4a3223454cd"} Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.461151 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.461426 4768 scope.go:117] "RemoveContainer" containerID="d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.463309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s" event={"ID":"69302a1d-718a-4033-89ed-38df69ae7ec5","Type":"ContainerStarted","Data":"2dd2ba8f08fd9bf8b22e29ee6eca5380712a9ec6b8a9756eacce66f74ca93847"} Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.463337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s" event={"ID":"69302a1d-718a-4033-89ed-38df69ae7ec5","Type":"ContainerStarted","Data":"80149841bceccd52ef1bf71ec66a59d0e7a766f154d97a75a5bd10f7ac53e656"} Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.463522 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.474799 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.478766 4768 scope.go:117] "RemoveContainer" containerID="d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942" Nov 24 18:01:53 crc kubenswrapper[4768]: E1124 18:01:53.479263 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942\": container with ID starting with d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942 not found: ID does not exist" containerID="d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.479300 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942"} err="failed to get container status \"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942\": rpc error: code = NotFound desc = could not find container \"d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942\": container with ID starting with d55f7d67b46e8a5a868a11bfc64e0298202c4a00277864054581560215ec8942 not found: ID does not exist" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.485415 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dcd99bb4d-nlc7s" 
podStartSLOduration=3.485398114 podStartE2EDuration="3.485398114s" podCreationTimestamp="2025-11-24 18:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:01:53.48475521 +0000 UTC m=+752.345336997" watchObservedRunningTime="2025-11-24 18:01:53.485398114 +0000 UTC m=+752.345979911" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.501723 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"] Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.508648 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69484f5475-lht2m"] Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.904930 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1249121e-b23a-4827-92d9-a45b452c8e08" path="/var/lib/kubelet/pods/1249121e-b23a-4827-92d9-a45b452c8e08/volumes" Nov 24 18:01:53 crc kubenswrapper[4768]: I1124 18:01:53.905350 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" path="/var/lib/kubelet/pods/a30c3d5d-8212-43e0-b3eb-4f6df6e769a1/volumes" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.671943 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg"] Nov 24 18:01:54 crc kubenswrapper[4768]: E1124 18:01:54.672403 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" containerName="route-controller-manager" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.672414 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" containerName="route-controller-manager" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.672530 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30c3d5d-8212-43e0-b3eb-4f6df6e769a1" containerName="route-controller-manager" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.672887 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.675463 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.675537 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.676596 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.676740 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.677242 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.677581 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.689890 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg"] Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.753292 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-serving-cert\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.753365 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r65\" (UniqueName: \"kubernetes.io/projected/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-kube-api-access-r8r65\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.753391 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-config\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.753512 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-client-ca\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.854376 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-client-ca\") pod 
\"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.854453 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-serving-cert\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.854475 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8r65\" (UniqueName: \"kubernetes.io/projected/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-kube-api-access-r8r65\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.854526 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-config\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.855762 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-client-ca\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.855795 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-config\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.860675 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-serving-cert\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.869375 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8r65\" (UniqueName: \"kubernetes.io/projected/2f67da56-ed80-4c1d-b2a2-d2f15bcda6db-kube-api-access-r8r65\") pod \"route-controller-manager-7879777b47-97wlg\" (UID: \"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db\") " pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:54 crc kubenswrapper[4768]: I1124 18:01:54.987821 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:55 crc kubenswrapper[4768]: I1124 18:01:55.077450 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 18:01:55 crc kubenswrapper[4768]: I1124 18:01:55.538932 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg"] Nov 24 18:01:55 crc kubenswrapper[4768]: W1124 18:01:55.553665 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f67da56_ed80_4c1d_b2a2_d2f15bcda6db.slice/crio-ad04833f04db9db9ea82f83aa15f58d3e71037a28ca7abb4e3b562e89c402465 WatchSource:0}: Error finding container ad04833f04db9db9ea82f83aa15f58d3e71037a28ca7abb4e3b562e89c402465: Status 404 returned error can't find the container with id ad04833f04db9db9ea82f83aa15f58d3e71037a28ca7abb4e3b562e89c402465 Nov 24 18:01:56 crc kubenswrapper[4768]: I1124 18:01:56.509643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" event={"ID":"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db","Type":"ContainerStarted","Data":"da8964848a2446448d1ab3ef1008290706610919b05c9e1e485bd64e0c8f65f2"} Nov 24 18:01:56 crc kubenswrapper[4768]: I1124 18:01:56.510386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" event={"ID":"2f67da56-ed80-4c1d-b2a2-d2f15bcda6db","Type":"ContainerStarted","Data":"ad04833f04db9db9ea82f83aa15f58d3e71037a28ca7abb4e3b562e89c402465"} Nov 24 18:01:56 crc kubenswrapper[4768]: I1124 18:01:56.510415 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:56 crc kubenswrapper[4768]: I1124 18:01:56.520032 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" Nov 24 18:01:56 crc kubenswrapper[4768]: I1124 18:01:56.538359 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7879777b47-97wlg" podStartSLOduration=6.538331187 podStartE2EDuration="6.538331187s" podCreationTimestamp="2025-11-24 18:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:01:56.536973056 +0000 UTC m=+755.397554833" watchObservedRunningTime="2025-11-24 18:01:56.538331187 +0000 UTC m=+755.398912964" Nov 24 18:02:05 crc kubenswrapper[4768]: I1124 18:02:05.867734 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65d776c5c5-mm52q" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.626647 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-br7zz"] Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.629456 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.632085 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.632134 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.632186 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-s9f5n" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.633791 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2"] Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.634959 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.636214 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.656278 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2"] Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk6bk\" (UniqueName: \"kubernetes.io/projected/d52b407a-4b4f-47ce-9cc4-244b3fca2db4-kube-api-access-vk6bk\") pod \"frr-k8s-webhook-server-6998585d5-bmlh2\" (UID: \"d52b407a-4b4f-47ce-9cc4-244b3fca2db4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708630 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-startup\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wr5\" (UniqueName: \"kubernetes.io/projected/cbca4cc0-b37d-4521-8c37-706beb2a4030-kube-api-access-q6wr5\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708712 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-sockets\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708731 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-conf\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbca4cc0-b37d-4521-8c37-706beb2a4030-metrics-certs\") pod 
\"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-metrics\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.708970 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d52b407a-4b4f-47ce-9cc4-244b3fca2db4-cert\") pod \"frr-k8s-webhook-server-6998585d5-bmlh2\" (UID: \"d52b407a-4b4f-47ce-9cc4-244b3fca2db4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.709071 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-reloader\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.725440 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xj9kr"] Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.726662 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.731370 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.731386 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.731596 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5bpq7" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.732822 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.740117 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-8wcfs"] Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.741179 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.743961 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.753823 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-8wcfs"] Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.809968 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-reloader\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810069 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzvl\" (UniqueName: \"kubernetes.io/projected/d1e6e133-4775-411b-b0e1-516e2cd2e276-kube-api-access-mxzvl\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810096 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d270c276-5cc7-40cb-a690-27a3e3b5d29a-metrics-certs\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810122 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d1e6e133-4775-411b-b0e1-516e2cd2e276-metallb-excludel2\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk6bk\" (UniqueName: \"kubernetes.io/projected/d52b407a-4b4f-47ce-9cc4-244b3fca2db4-kube-api-access-vk6bk\") pod \"frr-k8s-webhook-server-6998585d5-bmlh2\" (UID: \"d52b407a-4b4f-47ce-9cc4-244b3fca2db4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-startup\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-metrics-certs\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 
18:02:06.810215 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6dj\" (UniqueName: \"kubernetes.io/projected/d270c276-5cc7-40cb-a690-27a3e3b5d29a-kube-api-access-jk6dj\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6wr5\" (UniqueName: \"kubernetes.io/projected/cbca4cc0-b37d-4521-8c37-706beb2a4030-kube-api-access-q6wr5\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810255 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-sockets\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d270c276-5cc7-40cb-a690-27a3e3b5d29a-cert\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-conf\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810301 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbca4cc0-b37d-4521-8c37-706beb2a4030-metrics-certs\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810322 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-metrics\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810340 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d52b407a-4b4f-47ce-9cc4-244b3fca2db4-cert\") pod \"frr-k8s-webhook-server-6998585d5-bmlh2\" (UID: \"d52b407a-4b4f-47ce-9cc4-244b3fca2db4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.810657 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-reloader\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.811184 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-sockets\") pod 
\"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.811253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-metrics\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.811296 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-conf\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.811664 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cbca4cc0-b37d-4521-8c37-706beb2a4030-frr-startup\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.816938 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d52b407a-4b4f-47ce-9cc4-244b3fca2db4-cert\") pod \"frr-k8s-webhook-server-6998585d5-bmlh2\" (UID: \"d52b407a-4b4f-47ce-9cc4-244b3fca2db4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.817644 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbca4cc0-b37d-4521-8c37-706beb2a4030-metrics-certs\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.829166 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk6bk\" (UniqueName: \"kubernetes.io/projected/d52b407a-4b4f-47ce-9cc4-244b3fca2db4-kube-api-access-vk6bk\") pod \"frr-k8s-webhook-server-6998585d5-bmlh2\" (UID: \"d52b407a-4b4f-47ce-9cc4-244b3fca2db4\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.829342 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6wr5\" (UniqueName: \"kubernetes.io/projected/cbca4cc0-b37d-4521-8c37-706beb2a4030-kube-api-access-q6wr5\") pod \"frr-k8s-br7zz\" (UID: \"cbca4cc0-b37d-4521-8c37-706beb2a4030\") " pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.911863 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.911909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzvl\" (UniqueName: \"kubernetes.io/projected/d1e6e133-4775-411b-b0e1-516e2cd2e276-kube-api-access-mxzvl\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.911931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d270c276-5cc7-40cb-a690-27a3e3b5d29a-metrics-certs\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.911953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d1e6e133-4775-411b-b0e1-516e2cd2e276-metallb-excludel2\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.911978 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-metrics-certs\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.912009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk6dj\" (UniqueName: \"kubernetes.io/projected/d270c276-5cc7-40cb-a690-27a3e3b5d29a-kube-api-access-jk6dj\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.912036 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d270c276-5cc7-40cb-a690-27a3e3b5d29a-cert\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: E1124 18:02:06.912296 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 18:02:06 crc kubenswrapper[4768]: E1124 18:02:06.912385 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist podName:d1e6e133-4775-411b-b0e1-516e2cd2e276 nodeName:}" failed. No retries permitted until 2025-11-24 18:02:07.412367841 +0000 UTC m=+766.272949618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist") pod "speaker-xj9kr" (UID: "d1e6e133-4775-411b-b0e1-516e2cd2e276") : secret "metallb-memberlist" not found Nov 24 18:02:06 crc kubenswrapper[4768]: E1124 18:02:06.912296 4768 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 24 18:02:06 crc kubenswrapper[4768]: E1124 18:02:06.912418 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-metrics-certs podName:d1e6e133-4775-411b-b0e1-516e2cd2e276 nodeName:}" failed. No retries permitted until 2025-11-24 18:02:07.412412812 +0000 UTC m=+766.272994589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-metrics-certs") pod "speaker-xj9kr" (UID: "d1e6e133-4775-411b-b0e1-516e2cd2e276") : secret "speaker-certs-secret" not found Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.912767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d1e6e133-4775-411b-b0e1-516e2cd2e276-metallb-excludel2\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.920132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d270c276-5cc7-40cb-a690-27a3e3b5d29a-cert\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.920291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d270c276-5cc7-40cb-a690-27a3e3b5d29a-metrics-certs\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.930289 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzvl\" (UniqueName: \"kubernetes.io/projected/d1e6e133-4775-411b-b0e1-516e2cd2e276-kube-api-access-mxzvl\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.932987 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk6dj\" (UniqueName: \"kubernetes.io/projected/d270c276-5cc7-40cb-a690-27a3e3b5d29a-kube-api-access-jk6dj\") pod \"controller-6c7b4b5f48-8wcfs\" (UID: \"d270c276-5cc7-40cb-a690-27a3e3b5d29a\") " pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.954321 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:06 crc kubenswrapper[4768]: I1124 18:02:06.965527 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.053985 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.360143 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2"] Nov 24 18:02:07 crc kubenswrapper[4768]: W1124 18:02:07.366512 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd52b407a_4b4f_47ce_9cc4_244b3fca2db4.slice/crio-0852e866d1341539a5f219a93b51604146c52fa921d5b52d5d50decf8ab76b38 WatchSource:0}: Error finding container 0852e866d1341539a5f219a93b51604146c52fa921d5b52d5d50decf8ab76b38: Status 404 returned error can't find the container with id 0852e866d1341539a5f219a93b51604146c52fa921d5b52d5d50decf8ab76b38 Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.418239 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-metrics-certs\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.418335 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:07 crc kubenswrapper[4768]: E1124 18:02:07.418466 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 18:02:07 crc kubenswrapper[4768]: E1124 18:02:07.418558 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist podName:d1e6e133-4775-411b-b0e1-516e2cd2e276 nodeName:}" failed. No retries permitted until 2025-11-24 18:02:08.418539739 +0000 UTC m=+767.279121516 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist") pod "speaker-xj9kr" (UID: "d1e6e133-4775-411b-b0e1-516e2cd2e276") : secret "metallb-memberlist" not found Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.425543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-metrics-certs\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.481458 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-8wcfs"] Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.571857 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"9aa29d2274047c5d00f5af752ac2aa9b9f14c23d7331baf77cba51f79fb3ca0b"} Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.573002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8wcfs" event={"ID":"d270c276-5cc7-40cb-a690-27a3e3b5d29a","Type":"ContainerStarted","Data":"fdf687f935b760090d85d34bc3365893d2a5cdf24deb1bc937972a98d6112e78"} Nov 24 18:02:07 crc kubenswrapper[4768]: I1124 18:02:07.573837 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" event={"ID":"d52b407a-4b4f-47ce-9cc4-244b3fca2db4","Type":"ContainerStarted","Data":"0852e866d1341539a5f219a93b51604146c52fa921d5b52d5d50decf8ab76b38"} Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.432716 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.444411 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1e6e133-4775-411b-b0e1-516e2cd2e276-memberlist\") pod \"speaker-xj9kr\" (UID: \"d1e6e133-4775-411b-b0e1-516e2cd2e276\") " pod="metallb-system/speaker-xj9kr" Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.540747 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-xj9kr" Nov 24 18:02:08 crc kubenswrapper[4768]: W1124 18:02:08.564858 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1e6e133_4775_411b_b0e1_516e2cd2e276.slice/crio-29c20c329d71856e0356fce124e22dc8290c0e4edd1816137de44949d7ee26cb WatchSource:0}: Error finding container 29c20c329d71856e0356fce124e22dc8290c0e4edd1816137de44949d7ee26cb: Status 404 returned error can't find the container with id 29c20c329d71856e0356fce124e22dc8290c0e4edd1816137de44949d7ee26cb Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.582284 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xj9kr" event={"ID":"d1e6e133-4775-411b-b0e1-516e2cd2e276","Type":"ContainerStarted","Data":"29c20c329d71856e0356fce124e22dc8290c0e4edd1816137de44949d7ee26cb"} Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.584555 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8wcfs" event={"ID":"d270c276-5cc7-40cb-a690-27a3e3b5d29a","Type":"ContainerStarted","Data":"f3ec0cfbd63c5d434fb97a69b87ed53978c399d7fca9a81818b9d83429710a99"} Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.584589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-8wcfs" event={"ID":"d270c276-5cc7-40cb-a690-27a3e3b5d29a","Type":"ContainerStarted","Data":"bbfe4bb6012c0bd8e8ca152f3d4273f2e40abd18fdc1e4d27c6f20fefd2321be"} Nov 24 18:02:08 crc kubenswrapper[4768]: I1124 18:02:08.584839 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:09 crc kubenswrapper[4768]: I1124 18:02:09.602878 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xj9kr" event={"ID":"d1e6e133-4775-411b-b0e1-516e2cd2e276","Type":"ContainerStarted","Data":"b686c9b79cb5e2ac54141f35e7b5773df36c8b2a67361780beda7f01ee84aeb8"} Nov 24 18:02:09 crc kubenswrapper[4768]: I1124 18:02:09.603224 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xj9kr" event={"ID":"d1e6e133-4775-411b-b0e1-516e2cd2e276","Type":"ContainerStarted","Data":"1010c82e7141d90d03376508ff3c6f6490ba0c1832b6477ad76336ee7d178af3"} Nov 24 18:02:09 crc kubenswrapper[4768]: I1124 18:02:09.627881 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-8wcfs" podStartSLOduration=3.627862059 podStartE2EDuration="3.627862059s" podCreationTimestamp="2025-11-24 18:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:02:08.613256581 +0000 UTC m=+767.473838358" watchObservedRunningTime="2025-11-24 18:02:09.627862059 +0000 UTC m=+768.488443836" Nov 24 18:02:09 crc kubenswrapper[4768]: I1124 18:02:09.633461 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xj9kr" podStartSLOduration=3.633446955 podStartE2EDuration="3.633446955s" podCreationTimestamp="2025-11-24 18:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:02:09.622618773 +0000 UTC m=+768.483200560" watchObservedRunningTime="2025-11-24 18:02:09.633446955 +0000 UTC m=+768.494028732" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.056090 4768 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkdrn"] Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.057410 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.071793 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkdrn"] Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.171624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtxq\" (UniqueName: \"kubernetes.io/projected/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-kube-api-access-dvtxq\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.171681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-utilities\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.171923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-catalog-content\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.272935 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtxq\" (UniqueName: \"kubernetes.io/projected/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-kube-api-access-dvtxq\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.272994 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-utilities\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.273055 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-catalog-content\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.273478 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-catalog-content\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.273679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-utilities\") pod \"community-operators-rkdrn\" 
(UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.310438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtxq\" (UniqueName: \"kubernetes.io/projected/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-kube-api-access-dvtxq\") pod \"community-operators-rkdrn\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.372374 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.609611 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xj9kr" Nov 24 18:02:10 crc kubenswrapper[4768]: I1124 18:02:10.953888 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkdrn"] Nov 24 18:02:10 crc kubenswrapper[4768]: W1124 18:02:10.969628 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a8bbc8_7df6_4c77_b445_dd5a91a0b746.slice/crio-efee4b9e1856d9b24dc96ffe00925c95145eed583132f8723b87faaf5caf727a WatchSource:0}: Error finding container efee4b9e1856d9b24dc96ffe00925c95145eed583132f8723b87faaf5caf727a: Status 404 returned error can't find the container with id efee4b9e1856d9b24dc96ffe00925c95145eed583132f8723b87faaf5caf727a Nov 24 18:02:11 crc kubenswrapper[4768]: I1124 18:02:11.617201 4768 generic.go:334] "Generic (PLEG): container finished" podID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerID="9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730" exitCode=0 Nov 24 18:02:11 crc kubenswrapper[4768]: I1124 18:02:11.617397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerDied","Data":"9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730"} Nov 24 18:02:11 crc kubenswrapper[4768]: I1124 18:02:11.617443 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerStarted","Data":"efee4b9e1856d9b24dc96ffe00925c95145eed583132f8723b87faaf5caf727a"} Nov 24 18:02:13 crc kubenswrapper[4768]: I1124 18:02:13.656308 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:02:13 crc kubenswrapper[4768]: I1124 18:02:13.656610 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:02:13 crc kubenswrapper[4768]: I1124 18:02:13.656651 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:02:13 crc kubenswrapper[4768]: I1124 18:02:13.657223 4768 kuberuntime_manager.go:1027] "Message for Container 
of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4583a9ac279158eca4e8f57a4180ced088f2fed29490556a10e250154558a77"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:02:13 crc kubenswrapper[4768]: I1124 18:02:13.657269 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://b4583a9ac279158eca4e8f57a4180ced088f2fed29490556a10e250154558a77" gracePeriod=600 Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.411630 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wclk7"] Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.413019 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.436219 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wclk7"] Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.549159 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-catalog-content\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.549236 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mkr\" (UniqueName: \"kubernetes.io/projected/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-kube-api-access-f2mkr\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.549267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-utilities\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.639495 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" event={"ID":"d52b407a-4b4f-47ce-9cc4-244b3fca2db4","Type":"ContainerStarted","Data":"abe54bdb34a4b8070859e6f227b48d0992de96a68f7bb793c8f34279ee0a66aa"} Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.639835 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.641202 4768 generic.go:334] "Generic (PLEG): container finished" podID="cbca4cc0-b37d-4521-8c37-706beb2a4030" containerID="60a20d67a1fa7e5288176863ddfa3b579c74e796ccb429e99e4924ff4debe37d" exitCode=0 Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.641308 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" 
event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerDied","Data":"60a20d67a1fa7e5288176863ddfa3b579c74e796ccb429e99e4924ff4debe37d"} Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.644310 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="b4583a9ac279158eca4e8f57a4180ced088f2fed29490556a10e250154558a77" exitCode=0 Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.644374 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"b4583a9ac279158eca4e8f57a4180ced088f2fed29490556a10e250154558a77"} Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.644430 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"5b1fcca249f25d296bfba4402fd65255a8a672ed04eb8c495487a6905cab2500"} Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.644454 4768 scope.go:117] "RemoveContainer" containerID="5b11ee9a43148b8f430bd2257b4fc5d4ab0802be7470cf787730b8c0e93d7060" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.646243 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerStarted","Data":"bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01"} Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.650606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-catalog-content\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.650678 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mkr\" (UniqueName: \"kubernetes.io/projected/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-kube-api-access-f2mkr\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.650705 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-utilities\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.651148 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-utilities\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.651220 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-catalog-content\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc 
kubenswrapper[4768]: I1124 18:02:14.667982 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" podStartSLOduration=1.73307948 podStartE2EDuration="8.667961282s" podCreationTimestamp="2025-11-24 18:02:06 +0000 UTC" firstStartedPulling="2025-11-24 18:02:07.368192731 +0000 UTC m=+766.228774508" lastFinishedPulling="2025-11-24 18:02:14.303074533 +0000 UTC m=+773.163656310" observedRunningTime="2025-11-24 18:02:14.663473096 +0000 UTC m=+773.524054883" watchObservedRunningTime="2025-11-24 18:02:14.667961282 +0000 UTC m=+773.528543069" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.669845 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mkr\" (UniqueName: \"kubernetes.io/projected/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-kube-api-access-f2mkr\") pod \"redhat-marketplace-wclk7\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:14 crc kubenswrapper[4768]: I1124 18:02:14.751003 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.160601 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wclk7"] Nov 24 18:02:15 crc kubenswrapper[4768]: W1124 18:02:15.172893 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda12fd5fd_cf1c_47e8_b8cc_4ce03cf40987.slice/crio-4c67ae54a767bc96d1b228bf65e8c9f8e5014dc5c3c197818db42e0b0e46e8dc WatchSource:0}: Error finding container 4c67ae54a767bc96d1b228bf65e8c9f8e5014dc5c3c197818db42e0b0e46e8dc: Status 404 returned error can't find the container with id 4c67ae54a767bc96d1b228bf65e8c9f8e5014dc5c3c197818db42e0b0e46e8dc Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.656480 4768 generic.go:334] "Generic (PLEG): container finished" podID="cbca4cc0-b37d-4521-8c37-706beb2a4030" containerID="339c7ee1703bc22d029f5a88745785e7e167bf882a3ae80f9e5e076388f7d2b8" exitCode=0 Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.657225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerDied","Data":"339c7ee1703bc22d029f5a88745785e7e167bf882a3ae80f9e5e076388f7d2b8"} Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.666974 4768 generic.go:334] "Generic (PLEG): container finished" podID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerID="bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01" exitCode=0 Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.667058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerDied","Data":"bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01"} Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.669561 4768 generic.go:334] "Generic (PLEG): container finished" podID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerID="e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9" exitCode=0 Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.669632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" 
event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerDied","Data":"e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9"} Nov 24 18:02:15 crc kubenswrapper[4768]: I1124 18:02:15.669709 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerStarted","Data":"4c67ae54a767bc96d1b228bf65e8c9f8e5014dc5c3c197818db42e0b0e46e8dc"} Nov 24 18:02:16 crc kubenswrapper[4768]: I1124 18:02:16.677038 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerStarted","Data":"7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f"} Nov 24 18:02:16 crc kubenswrapper[4768]: I1124 18:02:16.679437 4768 generic.go:334] "Generic (PLEG): container finished" podID="cbca4cc0-b37d-4521-8c37-706beb2a4030" containerID="ccf16aad9dd4eccd1b9420089af82e315eba33dc8980a71b6d2dcfd1126f2a5d" exitCode=0 Nov 24 18:02:16 crc kubenswrapper[4768]: I1124 18:02:16.679517 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerDied","Data":"ccf16aad9dd4eccd1b9420089af82e315eba33dc8980a71b6d2dcfd1126f2a5d"} Nov 24 18:02:16 crc kubenswrapper[4768]: I1124 18:02:16.681682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerStarted","Data":"ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd"} Nov 24 18:02:16 crc kubenswrapper[4768]: I1124 18:02:16.716044 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkdrn" podStartSLOduration=2.230118525 podStartE2EDuration="6.716019386s" podCreationTimestamp="2025-11-24 18:02:10 +0000 UTC" firstStartedPulling="2025-11-24 18:02:11.620537756 +0000 UTC m=+770.481119533" lastFinishedPulling="2025-11-24 18:02:16.106438617 +0000 UTC m=+774.967020394" observedRunningTime="2025-11-24 18:02:16.713852955 +0000 UTC m=+775.574434752" watchObservedRunningTime="2025-11-24 18:02:16.716019386 +0000 UTC m=+775.576601173" Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.061551 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-8wcfs" Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.691290 4768 generic.go:334] "Generic (PLEG): container finished" podID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerID="7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f" exitCode=0 Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.691394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerDied","Data":"7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f"} Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.696939 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"9089d2f2bb6b15954b4fc7a38000025a67c58cd7100121fc9dc15f9719dc8e4f"} Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.696978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" 
event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"00c0cdec9ec8fb2dae75e890650206c41bbf31b63f93a1737079e9ab4c5b8937"} Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.696990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"2d776f4c582a68d59e0d5f21e9b7d250dc582a7debc22e4c361241aa35d974bb"} Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.696998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"328c0f6bf468bcca19fccaaefe7f0fbfe0002bc208bbce5c960d242f6d87f4bc"} Nov 24 18:02:17 crc kubenswrapper[4768]: I1124 18:02:17.697006 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"21275cc6a6b72692263aaacadc634d3aa52e37c78a273862aec297cb25a0b7dd"} Nov 24 18:02:18 crc kubenswrapper[4768]: I1124 18:02:18.977782 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xj9kr" Nov 24 18:02:19 crc kubenswrapper[4768]: I1124 18:02:19.021730 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerStarted","Data":"aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f"} Nov 24 18:02:19 crc kubenswrapper[4768]: I1124 18:02:19.026645 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-br7zz" event={"ID":"cbca4cc0-b37d-4521-8c37-706beb2a4030","Type":"ContainerStarted","Data":"a107b8e34f21944ae5f5d0952b1a71ad515bbf51102ffdc03a8f33fab2476278"} Nov 24 18:02:19 crc kubenswrapper[4768]: I1124 18:02:19.026856 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:19 crc kubenswrapper[4768]: I1124 18:02:19.039356 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wclk7" podStartSLOduration=2.595636851 podStartE2EDuration="5.039338273s" podCreationTimestamp="2025-11-24 18:02:14 +0000 UTC" firstStartedPulling="2025-11-24 18:02:15.679650579 +0000 UTC m=+774.540232396" lastFinishedPulling="2025-11-24 18:02:18.123352041 +0000 UTC m=+776.983933818" observedRunningTime="2025-11-24 18:02:19.038769177 +0000 UTC m=+777.899350964" watchObservedRunningTime="2025-11-24 18:02:19.039338273 +0000 UTC m=+777.899920050" Nov 24 18:02:19 crc kubenswrapper[4768]: I1124 18:02:19.063822 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-br7zz" podStartSLOduration=5.870362758 podStartE2EDuration="13.063801107s" podCreationTimestamp="2025-11-24 18:02:06 +0000 UTC" firstStartedPulling="2025-11-24 18:02:07.080699885 +0000 UTC m=+765.941281652" lastFinishedPulling="2025-11-24 18:02:14.274138224 +0000 UTC m=+773.134720001" observedRunningTime="2025-11-24 18:02:19.060670809 +0000 UTC m=+777.921252596" watchObservedRunningTime="2025-11-24 18:02:19.063801107 +0000 UTC m=+777.924382884" Nov 24 18:02:20 crc kubenswrapper[4768]: I1124 18:02:20.373554 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:20 crc kubenswrapper[4768]: I1124 18:02:20.373641 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:20 crc kubenswrapper[4768]: I1124 18:02:20.438806 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:21 crc kubenswrapper[4768]: I1124 18:02:21.074929 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:21 crc kubenswrapper[4768]: I1124 18:02:21.955678 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:22 crc kubenswrapper[4768]: I1124 18:02:22.011444 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.202782 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkdrn"] Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.203356 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkdrn" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="registry-server" containerID="cri-o://ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd" gracePeriod=2 Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.657124 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.751579 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.751624 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.799106 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.855452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-utilities\") pod \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.855583 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-catalog-content\") pod \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.855655 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvtxq\" (UniqueName: \"kubernetes.io/projected/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-kube-api-access-dvtxq\") pod \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\" (UID: \"34a8bbc8-7df6-4c77-b445-dd5a91a0b746\") " Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.856345 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-utilities" (OuterVolumeSpecName: "utilities") pod "34a8bbc8-7df6-4c77-b445-dd5a91a0b746" (UID: "34a8bbc8-7df6-4c77-b445-dd5a91a0b746"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.856652 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.860842 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-kube-api-access-dvtxq" (OuterVolumeSpecName: "kube-api-access-dvtxq") pod "34a8bbc8-7df6-4c77-b445-dd5a91a0b746" (UID: "34a8bbc8-7df6-4c77-b445-dd5a91a0b746"). InnerVolumeSpecName "kube-api-access-dvtxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.905963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34a8bbc8-7df6-4c77-b445-dd5a91a0b746" (UID: "34a8bbc8-7df6-4c77-b445-dd5a91a0b746"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.957712 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:24 crc kubenswrapper[4768]: I1124 18:02:24.957748 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvtxq\" (UniqueName: \"kubernetes.io/projected/34a8bbc8-7df6-4c77-b445-dd5a91a0b746-kube-api-access-dvtxq\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.062555 4768 generic.go:334] "Generic (PLEG): container finished" podID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerID="ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd" exitCode=0 Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.062603 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerDied","Data":"ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd"} Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.062652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkdrn" event={"ID":"34a8bbc8-7df6-4c77-b445-dd5a91a0b746","Type":"ContainerDied","Data":"efee4b9e1856d9b24dc96ffe00925c95145eed583132f8723b87faaf5caf727a"} Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.062676 4768 scope.go:117] "RemoveContainer" containerID="ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.062793 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkdrn" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.082599 4768 scope.go:117] "RemoveContainer" containerID="bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.094949 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkdrn"] Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.098726 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkdrn"] Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.120697 4768 scope.go:117] "RemoveContainer" containerID="9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.126832 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.136904 4768 scope.go:117] "RemoveContainer" containerID="ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd" Nov 24 18:02:25 crc kubenswrapper[4768]: E1124 18:02:25.137749 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd\": container with ID starting with ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd not found: ID does not exist" containerID="ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.137816 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd"} err="failed to get container status \"ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd\": rpc error: code = NotFound desc = could not find container \"ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd\": container with ID starting with ea159ba64bf3ec1b749e28b395f611e034c39e907bef05934160c8ac6aa901dd not found: ID does not exist" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.137862 4768 scope.go:117] "RemoveContainer" containerID="bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01" Nov 24 18:02:25 crc kubenswrapper[4768]: E1124 18:02:25.138420 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01\": container with ID starting with bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01 not found: ID does not exist" containerID="bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.138540 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01"} err="failed to get container status \"bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01\": rpc error: code = NotFound desc = could not find container \"bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01\": container with ID starting with bfcc4ac18bede10585e7442f95734f1bd875403223a7ce38eac0f52dad432a01 not found: ID does not exist" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.138593 4768 scope.go:117] "RemoveContainer" 
containerID="9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730" Nov 24 18:02:25 crc kubenswrapper[4768]: E1124 18:02:25.138988 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730\": container with ID starting with 9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730 not found: ID does not exist" containerID="9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.139038 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730"} err="failed to get container status \"9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730\": rpc error: code = NotFound desc = could not find container \"9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730\": container with ID starting with 9faecf6104f7b5de136981360a553243ee4a651e12e89294d72e780ae7946730 not found: ID does not exist" Nov 24 18:02:25 crc kubenswrapper[4768]: I1124 18:02:25.909131 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" path="/var/lib/kubelet/pods/34a8bbc8-7df6-4c77-b445-dd5a91a0b746/volumes" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.007093 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xx6dm"] Nov 24 18:02:26 crc kubenswrapper[4768]: E1124 18:02:26.007336 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="extract-utilities" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.007348 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="extract-utilities" Nov 24 18:02:26 crc kubenswrapper[4768]: E1124 18:02:26.007364 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="extract-content" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.007372 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="extract-content" Nov 24 18:02:26 crc kubenswrapper[4768]: E1124 18:02:26.007396 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="registry-server" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.007403 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="registry-server" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.007526 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a8bbc8-7df6-4c77-b445-dd5a91a0b746" containerName="registry-server" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.007966 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.009821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-4lsql" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.010693 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.010815 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.018790 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xx6dm"] Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.084143 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdtj\" (UniqueName: \"kubernetes.io/projected/911161df-90b7-4df2-93d4-9e91b2bf2e91-kube-api-access-xgdtj\") pod \"openstack-operator-index-xx6dm\" (UID: \"911161df-90b7-4df2-93d4-9e91b2bf2e91\") " pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.185711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdtj\" (UniqueName: \"kubernetes.io/projected/911161df-90b7-4df2-93d4-9e91b2bf2e91-kube-api-access-xgdtj\") pod \"openstack-operator-index-xx6dm\" (UID: \"911161df-90b7-4df2-93d4-9e91b2bf2e91\") " pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.204792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdtj\" (UniqueName: \"kubernetes.io/projected/911161df-90b7-4df2-93d4-9e91b2bf2e91-kube-api-access-xgdtj\") pod \"openstack-operator-index-xx6dm\" (UID: \"911161df-90b7-4df2-93d4-9e91b2bf2e91\") " pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.322825 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.714633 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xx6dm"] Nov 24 18:02:26 crc kubenswrapper[4768]: W1124 18:02:26.715105 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911161df_90b7_4df2_93d4_9e91b2bf2e91.slice/crio-ab637b8d6e5db9e80e894d95dbc9cc7bbd761a212c2059273d3bed1c163e5bb2 WatchSource:0}: Error finding container ab637b8d6e5db9e80e894d95dbc9cc7bbd761a212c2059273d3bed1c163e5bb2: Status 404 returned error can't find the container with id ab637b8d6e5db9e80e894d95dbc9cc7bbd761a212c2059273d3bed1c163e5bb2 Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.959867 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-br7zz" Nov 24 18:02:26 crc kubenswrapper[4768]: I1124 18:02:26.969774 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-bmlh2" Nov 24 18:02:27 crc kubenswrapper[4768]: I1124 18:02:27.078272 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xx6dm" event={"ID":"911161df-90b7-4df2-93d4-9e91b2bf2e91","Type":"ContainerStarted","Data":"ab637b8d6e5db9e80e894d95dbc9cc7bbd761a212c2059273d3bed1c163e5bb2"} Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.008731 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wclk7"] Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.009640 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wclk7" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="registry-server" containerID="cri-o://aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f" gracePeriod=2 Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.104996 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xx6dm" event={"ID":"911161df-90b7-4df2-93d4-9e91b2bf2e91","Type":"ContainerStarted","Data":"d4b08f586d4dca37dbea30ccfde23bad471e48d6647bde6822a7b1b7d1a5c81a"} Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.123919 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xx6dm" podStartSLOduration=2.7495642179999997 podStartE2EDuration="6.123902602s" podCreationTimestamp="2025-11-24 18:02:25 +0000 UTC" firstStartedPulling="2025-11-24 18:02:26.718706157 +0000 UTC m=+785.579287934" lastFinishedPulling="2025-11-24 18:02:30.093044541 +0000 UTC m=+788.953626318" observedRunningTime="2025-11-24 18:02:31.122352866 +0000 UTC m=+789.982934653" watchObservedRunningTime="2025-11-24 18:02:31.123902602 +0000 UTC m=+789.984484379" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.430988 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.458691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-utilities\") pod \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.458783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2mkr\" (UniqueName: \"kubernetes.io/projected/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-kube-api-access-f2mkr\") pod \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.458821 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-catalog-content\") pod \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\" (UID: \"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987\") " Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.459591 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-utilities" (OuterVolumeSpecName: "utilities") pod "a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" (UID: "a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.464006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-kube-api-access-f2mkr" (OuterVolumeSpecName: "kube-api-access-f2mkr") pod "a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" (UID: "a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987"). InnerVolumeSpecName "kube-api-access-f2mkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.487155 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" (UID: "a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.560197 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2mkr\" (UniqueName: \"kubernetes.io/projected/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-kube-api-access-f2mkr\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.560236 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:31 crc kubenswrapper[4768]: I1124 18:02:31.560246 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.115995 4768 generic.go:334] "Generic (PLEG): container finished" podID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerID="aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f" exitCode=0 Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.116077 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wclk7" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.116129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerDied","Data":"aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f"} Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.116201 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wclk7" event={"ID":"a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987","Type":"ContainerDied","Data":"4c67ae54a767bc96d1b228bf65e8c9f8e5014dc5c3c197818db42e0b0e46e8dc"} Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.116237 4768 scope.go:117] "RemoveContainer" containerID="aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.135454 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wclk7"] Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.141228 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wclk7"] Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.142596 4768 scope.go:117] "RemoveContainer" containerID="7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.159636 4768 scope.go:117] "RemoveContainer" containerID="e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.192504 4768 scope.go:117] "RemoveContainer" containerID="aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f" Nov 24 18:02:32 crc kubenswrapper[4768]: E1124 18:02:32.193046 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f\": container with ID starting with aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f not found: ID does not exist" containerID="aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.193086 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f"} err="failed to get container status \"aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f\": rpc error: code = NotFound desc = could not find container \"aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f\": container with ID starting with aba51498985b352fd5d0adf7d57e4423ea5b18296f1d0941cc46f3747e65564f not found: ID does not exist" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.193111 4768 scope.go:117] "RemoveContainer" containerID="7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f" Nov 24 18:02:32 crc kubenswrapper[4768]: E1124 18:02:32.193581 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f\": container with ID starting with 7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f not found: ID does not exist" containerID="7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.193630 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f"} err="failed to get container status \"7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f\": rpc error: code = NotFound desc = could not find container \"7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f\": container with ID starting with 7bbd7c84df6434296c0bd16ed861bcd754158b5bd51987ba3b8837b569cba10f not found: ID does not exist" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.193658 4768 scope.go:117] "RemoveContainer" containerID="e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9" Nov 24 18:02:32 crc kubenswrapper[4768]: E1124 18:02:32.194202 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9\": container with ID starting with e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9 not found: ID does not exist" containerID="e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9" Nov 24 18:02:32 crc kubenswrapper[4768]: I1124 18:02:32.194235 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9"} err="failed to get container status \"e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9\": rpc error: code = NotFound desc = could not find container \"e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9\": container with ID starting with e332ff44189e5502ac79927767dc0ba7c921f9a29f14ab57ea47f17d53ca80a9 not found: ID does not exist" Nov 24 18:02:33 crc kubenswrapper[4768]: I1124 18:02:33.905581 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" path="/var/lib/kubelet/pods/a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987/volumes" Nov 24 18:02:36 crc kubenswrapper[4768]: I1124 18:02:36.323856 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:36 crc kubenswrapper[4768]: I1124 18:02:36.323971 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:36 crc kubenswrapper[4768]: I1124 18:02:36.361223 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:37 crc kubenswrapper[4768]: I1124 18:02:37.183137 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xx6dm" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.058758 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn"] Nov 24 18:02:39 crc kubenswrapper[4768]: E1124 18:02:39.059029 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="extract-content" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.059042 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="extract-content" Nov 24 18:02:39 crc kubenswrapper[4768]: E1124 18:02:39.059059 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="registry-server" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.059067 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="registry-server" Nov 24 18:02:39 crc kubenswrapper[4768]: E1124 18:02:39.059090 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="extract-utilities" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.059098 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="extract-utilities" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.059234 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12fd5fd-cf1c-47e8-b8cc-4ce03cf40987" containerName="registry-server" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.060330 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.062716 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-p4bhg" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.069609 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn"] Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.165427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-bundle\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.165614 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-util\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.165729 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vfq\" (UniqueName: \"kubernetes.io/projected/339cc82e-8ca6-4822-b5b5-48be6f45f30c-kube-api-access-l8vfq\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.267294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-util\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.267643 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vfq\" (UniqueName: \"kubernetes.io/projected/339cc82e-8ca6-4822-b5b5-48be6f45f30c-kube-api-access-l8vfq\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.267811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-bundle\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.267921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-util\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.268424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-bundle\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.293359 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vfq\" (UniqueName: \"kubernetes.io/projected/339cc82e-8ca6-4822-b5b5-48be6f45f30c-kube-api-access-l8vfq\") pod \"97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.381506 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:39 crc kubenswrapper[4768]: I1124 18:02:39.797856 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn"] Nov 24 18:02:39 crc kubenswrapper[4768]: W1124 18:02:39.803160 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod339cc82e_8ca6_4822_b5b5_48be6f45f30c.slice/crio-66251cc884b29da3d04c8f1c4a9ece958281db6b3ca99582528c83add9f823dd WatchSource:0}: Error finding container 66251cc884b29da3d04c8f1c4a9ece958281db6b3ca99582528c83add9f823dd: Status 404 returned error can't find the container with id 66251cc884b29da3d04c8f1c4a9ece958281db6b3ca99582528c83add9f823dd Nov 24 18:02:40 crc kubenswrapper[4768]: I1124 18:02:40.168551 4768 generic.go:334] "Generic (PLEG): container finished" podID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerID="57c477bc7d5fc5380189a24b20aae5c7d77e06acfadd4dc70f9a5aa636eb5aed" exitCode=0 Nov 24 18:02:40 crc kubenswrapper[4768]: I1124 18:02:40.168599 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" event={"ID":"339cc82e-8ca6-4822-b5b5-48be6f45f30c","Type":"ContainerDied","Data":"57c477bc7d5fc5380189a24b20aae5c7d77e06acfadd4dc70f9a5aa636eb5aed"} Nov 24 18:02:40 crc kubenswrapper[4768]: I1124 18:02:40.168633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" event={"ID":"339cc82e-8ca6-4822-b5b5-48be6f45f30c","Type":"ContainerStarted","Data":"66251cc884b29da3d04c8f1c4a9ece958281db6b3ca99582528c83add9f823dd"} Nov 24 18:02:41 crc kubenswrapper[4768]: I1124 18:02:41.177822 4768 generic.go:334] "Generic (PLEG): container finished" podID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerID="525fc1db4704bd947fdc0d47eab242be9402482a7212db72ff28a112ebb70dbd" exitCode=0 Nov 24 18:02:41 crc kubenswrapper[4768]: I1124 18:02:41.177871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" event={"ID":"339cc82e-8ca6-4822-b5b5-48be6f45f30c","Type":"ContainerDied","Data":"525fc1db4704bd947fdc0d47eab242be9402482a7212db72ff28a112ebb70dbd"} Nov 24 18:02:42 crc kubenswrapper[4768]: I1124 18:02:42.187015 4768 generic.go:334] "Generic (PLEG): container finished" podID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerID="ea473ec94de7784f141912807305f336db4b5c68caee562490a9d7cd37131beb" exitCode=0 Nov 24 18:02:42 crc kubenswrapper[4768]: I1124 18:02:42.187058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" event={"ID":"339cc82e-8ca6-4822-b5b5-48be6f45f30c","Type":"ContainerDied","Data":"ea473ec94de7784f141912807305f336db4b5c68caee562490a9d7cd37131beb"} Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.481112 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.525705 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-util\") pod \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.525770 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vfq\" (UniqueName: \"kubernetes.io/projected/339cc82e-8ca6-4822-b5b5-48be6f45f30c-kube-api-access-l8vfq\") pod \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.525933 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-bundle\") pod \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\" (UID: \"339cc82e-8ca6-4822-b5b5-48be6f45f30c\") " Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.526645 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-bundle" (OuterVolumeSpecName: "bundle") pod "339cc82e-8ca6-4822-b5b5-48be6f45f30c" (UID: "339cc82e-8ca6-4822-b5b5-48be6f45f30c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.530933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339cc82e-8ca6-4822-b5b5-48be6f45f30c-kube-api-access-l8vfq" (OuterVolumeSpecName: "kube-api-access-l8vfq") pod "339cc82e-8ca6-4822-b5b5-48be6f45f30c" (UID: "339cc82e-8ca6-4822-b5b5-48be6f45f30c"). InnerVolumeSpecName "kube-api-access-l8vfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.542349 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-util" (OuterVolumeSpecName: "util") pod "339cc82e-8ca6-4822-b5b5-48be6f45f30c" (UID: "339cc82e-8ca6-4822-b5b5-48be6f45f30c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.627229 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.627266 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339cc82e-8ca6-4822-b5b5-48be6f45f30c-util\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:43 crc kubenswrapper[4768]: I1124 18:02:43.627282 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vfq\" (UniqueName: \"kubernetes.io/projected/339cc82e-8ca6-4822-b5b5-48be6f45f30c-kube-api-access-l8vfq\") on node \"crc\" DevicePath \"\"" Nov 24 18:02:44 crc kubenswrapper[4768]: I1124 18:02:44.201113 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" event={"ID":"339cc82e-8ca6-4822-b5b5-48be6f45f30c","Type":"ContainerDied","Data":"66251cc884b29da3d04c8f1c4a9ece958281db6b3ca99582528c83add9f823dd"} Nov 24 18:02:44 crc kubenswrapper[4768]: I1124 18:02:44.201152 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66251cc884b29da3d04c8f1c4a9ece958281db6b3ca99582528c83add9f823dd" Nov 24 18:02:44 crc kubenswrapper[4768]: I1124 18:02:44.201193 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.301079 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf"] Nov 24 18:02:50 crc kubenswrapper[4768]: E1124 18:02:50.301778 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="extract" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.301790 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="extract" Nov 24 18:02:50 crc kubenswrapper[4768]: E1124 18:02:50.301805 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="pull" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.301812 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="pull" Nov 24 18:02:50 crc kubenswrapper[4768]: E1124 18:02:50.301832 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="util" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.301837 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="util" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.301956 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="339cc82e-8ca6-4822-b5b5-48be6f45f30c" containerName="extract" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.302436 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.305427 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-nd6lb" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.326335 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf"] Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.417872 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsg6f\" (UniqueName: \"kubernetes.io/projected/029c591e-99fb-494c-93f1-c695b2b8b744-kube-api-access-rsg6f\") pod \"openstack-operator-controller-operator-7b874cbcf5-5ssbf\" (UID: \"029c591e-99fb-494c-93f1-c695b2b8b744\") " pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.519061 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsg6f\" (UniqueName: \"kubernetes.io/projected/029c591e-99fb-494c-93f1-c695b2b8b744-kube-api-access-rsg6f\") pod \"openstack-operator-controller-operator-7b874cbcf5-5ssbf\" (UID: \"029c591e-99fb-494c-93f1-c695b2b8b744\") " pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.539795 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsg6f\" (UniqueName: \"kubernetes.io/projected/029c591e-99fb-494c-93f1-c695b2b8b744-kube-api-access-rsg6f\") pod \"openstack-operator-controller-operator-7b874cbcf5-5ssbf\" (UID: \"029c591e-99fb-494c-93f1-c695b2b8b744\") " pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.618310 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:02:50 crc kubenswrapper[4768]: I1124 18:02:50.818935 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf"] Nov 24 18:02:50 crc kubenswrapper[4768]: W1124 18:02:50.826717 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod029c591e_99fb_494c_93f1_c695b2b8b744.slice/crio-bb2de1b538d927c07488cb75f3c76932c84146bbd1af8843b5cc7e89ffb36ea8 WatchSource:0}: Error finding container bb2de1b538d927c07488cb75f3c76932c84146bbd1af8843b5cc7e89ffb36ea8: Status 404 returned error can't find the container with id bb2de1b538d927c07488cb75f3c76932c84146bbd1af8843b5cc7e89ffb36ea8 Nov 24 18:02:51 crc kubenswrapper[4768]: I1124 18:02:51.246555 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" event={"ID":"029c591e-99fb-494c-93f1-c695b2b8b744","Type":"ContainerStarted","Data":"bb2de1b538d927c07488cb75f3c76932c84146bbd1af8843b5cc7e89ffb36ea8"} Nov 24 18:02:55 crc kubenswrapper[4768]: I1124 18:02:55.286824 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" event={"ID":"029c591e-99fb-494c-93f1-c695b2b8b744","Type":"ContainerStarted","Data":"2985dd77a5d2188e75e7c6d45e529b1212691cadc899aadda5560a598e88f4c0"} Nov 24 18:02:55 crc kubenswrapper[4768]: I1124 18:02:55.287190 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:02:55 crc kubenswrapper[4768]: I1124 18:02:55.316842 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" podStartSLOduration=1.725388652 podStartE2EDuration="5.316821548s" podCreationTimestamp="2025-11-24 18:02:50 +0000 UTC" firstStartedPulling="2025-11-24 18:02:50.828883072 +0000 UTC m=+809.689464839" lastFinishedPulling="2025-11-24 18:02:54.420315958 +0000 UTC m=+813.280897735" observedRunningTime="2025-11-24 18:02:55.313989612 +0000 UTC m=+814.174571399" watchObservedRunningTime="2025-11-24 18:02:55.316821548 +0000 UTC m=+814.177403345" Nov 24 18:03:00 crc kubenswrapper[4768]: I1124 18:03:00.621671 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b874cbcf5-5ssbf" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.488248 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzlt7"] Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.490005 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.507557 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzlt7"] Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.574371 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-catalog-content\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.574860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-utilities\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.574919 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9nn\" (UniqueName: \"kubernetes.io/projected/5e67aebe-8102-4767-9d4a-00c5e0317271-kube-api-access-qs9nn\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.676133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-catalog-content\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.676184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-utilities\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.676233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9nn\" (UniqueName: \"kubernetes.io/projected/5e67aebe-8102-4767-9d4a-00c5e0317271-kube-api-access-qs9nn\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.676678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-utilities\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.676734 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-catalog-content\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.707640 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qs9nn\" (UniqueName: \"kubernetes.io/projected/5e67aebe-8102-4767-9d4a-00c5e0317271-kube-api-access-qs9nn\") pod \"redhat-operators-dzlt7\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:08 crc kubenswrapper[4768]: I1124 18:03:08.804618 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:09 crc kubenswrapper[4768]: I1124 18:03:09.146459 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzlt7"] Nov 24 18:03:09 crc kubenswrapper[4768]: W1124 18:03:09.151940 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e67aebe_8102_4767_9d4a_00c5e0317271.slice/crio-0f213a9f6ab918cc001a7b7d9302fc3abc49d4262d037157ed40f1bc4aa4e6eb WatchSource:0}: Error finding container 0f213a9f6ab918cc001a7b7d9302fc3abc49d4262d037157ed40f1bc4aa4e6eb: Status 404 returned error can't find the container with id 0f213a9f6ab918cc001a7b7d9302fc3abc49d4262d037157ed40f1bc4aa4e6eb Nov 24 18:03:09 crc kubenswrapper[4768]: I1124 18:03:09.392368 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerStarted","Data":"8012cc4857fc2773853df36a6b23e6f5c1ed6053c564d2922b3f98331e4b6046"} Nov 24 18:03:09 crc kubenswrapper[4768]: I1124 18:03:09.392410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerStarted","Data":"0f213a9f6ab918cc001a7b7d9302fc3abc49d4262d037157ed40f1bc4aa4e6eb"} Nov 24 18:03:10 crc kubenswrapper[4768]: I1124 18:03:10.399043 4768 generic.go:334] "Generic (PLEG): container finished" podID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerID="8012cc4857fc2773853df36a6b23e6f5c1ed6053c564d2922b3f98331e4b6046" exitCode=0 Nov 24 18:03:10 crc kubenswrapper[4768]: I1124 18:03:10.399130 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerDied","Data":"8012cc4857fc2773853df36a6b23e6f5c1ed6053c564d2922b3f98331e4b6046"} Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.249708 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6jjlh"] Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.252511 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.260425 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jjlh"] Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.324507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-catalog-content\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.324696 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49j48\" (UniqueName: \"kubernetes.io/projected/7447f851-9eef-48b9-849e-ac7a51793472-kube-api-access-49j48\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.324878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-utilities\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.413885 4768 generic.go:334] "Generic (PLEG): container finished" podID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerID="55ec5d4f13ae3252bbac738285e62133b4dd2e06494655b8977d17833134daa5" exitCode=0 Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.413945 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerDied","Data":"55ec5d4f13ae3252bbac738285e62133b4dd2e06494655b8977d17833134daa5"} Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.426763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-catalog-content\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.426848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49j48\" (UniqueName: \"kubernetes.io/projected/7447f851-9eef-48b9-849e-ac7a51793472-kube-api-access-49j48\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.426909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-utilities\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.427380 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-utilities\") pod \"certified-operators-6jjlh\" (UID: 
\"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.427380 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-catalog-content\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.449141 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49j48\" (UniqueName: \"kubernetes.io/projected/7447f851-9eef-48b9-849e-ac7a51793472-kube-api-access-49j48\") pod \"certified-operators-6jjlh\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:12 crc kubenswrapper[4768]: I1124 18:03:12.586265 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:13 crc kubenswrapper[4768]: I1124 18:03:13.077730 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jjlh"] Nov 24 18:03:13 crc kubenswrapper[4768]: W1124 18:03:13.081790 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7447f851_9eef_48b9_849e_ac7a51793472.slice/crio-7919abb051c4dc27bc99bfdb9b32d591570220fe9cd9361dbde4b7dd61863203 WatchSource:0}: Error finding container 7919abb051c4dc27bc99bfdb9b32d591570220fe9cd9361dbde4b7dd61863203: Status 404 returned error can't find the container with id 7919abb051c4dc27bc99bfdb9b32d591570220fe9cd9361dbde4b7dd61863203 Nov 24 18:03:13 crc kubenswrapper[4768]: I1124 18:03:13.421585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerStarted","Data":"c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3"} Nov 24 18:03:13 crc kubenswrapper[4768]: I1124 18:03:13.422920 4768 generic.go:334] "Generic (PLEG): container finished" podID="7447f851-9eef-48b9-849e-ac7a51793472" containerID="19eeecb5e8ab8b10b2267db036c6bacdd34b12001c0db227a4d8317d88b408a0" exitCode=0 Nov 24 18:03:13 crc kubenswrapper[4768]: I1124 18:03:13.422959 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerDied","Data":"19eeecb5e8ab8b10b2267db036c6bacdd34b12001c0db227a4d8317d88b408a0"} Nov 24 18:03:13 crc kubenswrapper[4768]: I1124 18:03:13.422986 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerStarted","Data":"7919abb051c4dc27bc99bfdb9b32d591570220fe9cd9361dbde4b7dd61863203"} Nov 24 18:03:13 crc kubenswrapper[4768]: I1124 18:03:13.449994 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzlt7" podStartSLOduration=2.9725477700000003 podStartE2EDuration="5.449975614s" podCreationTimestamp="2025-11-24 18:03:08 +0000 UTC" firstStartedPulling="2025-11-24 18:03:10.400645117 +0000 UTC m=+829.261226894" lastFinishedPulling="2025-11-24 18:03:12.878072961 +0000 UTC m=+831.738654738" observedRunningTime="2025-11-24 18:03:13.444476622 +0000 UTC 
m=+832.305058409" watchObservedRunningTime="2025-11-24 18:03:13.449975614 +0000 UTC m=+832.310557391" Nov 24 18:03:14 crc kubenswrapper[4768]: I1124 18:03:14.433410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerStarted","Data":"557e217321498a3b8c3e9981d64ac2c67bb7d03f6d1cfbe23079f57aea81e307"} Nov 24 18:03:15 crc kubenswrapper[4768]: I1124 18:03:15.439731 4768 generic.go:334] "Generic (PLEG): container finished" podID="7447f851-9eef-48b9-849e-ac7a51793472" containerID="557e217321498a3b8c3e9981d64ac2c67bb7d03f6d1cfbe23079f57aea81e307" exitCode=0 Nov 24 18:03:15 crc kubenswrapper[4768]: I1124 18:03:15.439778 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerDied","Data":"557e217321498a3b8c3e9981d64ac2c67bb7d03f6d1cfbe23079f57aea81e307"} Nov 24 18:03:18 crc kubenswrapper[4768]: I1124 18:03:18.457449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerStarted","Data":"b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a"} Nov 24 18:03:18 crc kubenswrapper[4768]: I1124 18:03:18.476758 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6jjlh" podStartSLOduration=4.009505928 podStartE2EDuration="6.476741454s" podCreationTimestamp="2025-11-24 18:03:12 +0000 UTC" firstStartedPulling="2025-11-24 18:03:13.424272665 +0000 UTC m=+832.284854442" lastFinishedPulling="2025-11-24 18:03:15.891508191 +0000 UTC m=+834.752089968" observedRunningTime="2025-11-24 18:03:18.475565472 +0000 UTC m=+837.336147249" watchObservedRunningTime="2025-11-24 18:03:18.476741454 +0000 UTC m=+837.337323231" Nov 24 18:03:18 crc kubenswrapper[4768]: I1124 18:03:18.805573 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:18 crc kubenswrapper[4768]: I1124 18:03:18.805633 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:18 crc kubenswrapper[4768]: I1124 18:03:18.851220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.030594 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.034168 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.039986 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5t9md" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.082545 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.089641 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.096716 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mrgz4" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.102364 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.124553 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.133758 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.134279 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvqb\" (UniqueName: \"kubernetes.io/projected/ab197189-f8ba-4b06-b62a-73dd90994a39-kube-api-access-qnvqb\") pod \"cinder-operator-controller-manager-79856dc55c-nx9kk\" (UID: \"ab197189-f8ba-4b06-b62a-73dd90994a39\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.134333 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fkb\" (UniqueName: \"kubernetes.io/projected/c6d746c7-cf41-4ebd-95ba-e23836f6e5d4-kube-api-access-k7fkb\") pod \"barbican-operator-controller-manager-86dc4d89c8-wtd7r\" (UID: \"c6d746c7-cf41-4ebd-95ba-e23836f6e5d4\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.134914 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.139907 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zgl7f" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.144212 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.171636 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.172913 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.174327 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.176897 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sxfvp" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.186028 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.186971 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.192377 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tfdc6" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.203305 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.227460 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.228589 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.235351 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.236310 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fgk\" (UniqueName: \"kubernetes.io/projected/28171867-a10a-4f0c-840d-ce55038bcd93-kube-api-access-t8fgk\") pod \"glance-operator-controller-manager-69fbff6fff-t2zl8\" (UID: \"28171867-a10a-4f0c-840d-ce55038bcd93\") " pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.236376 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxqb\" (UniqueName: \"kubernetes.io/projected/afa155f0-dde8-4d99-a454-527207b3189c-kube-api-access-qxxqb\") pod \"heat-operator-controller-manager-774b86978c-xw2jj\" (UID: \"afa155f0-dde8-4d99-a454-527207b3189c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.236410 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsb9\" (UniqueName: \"kubernetes.io/projected/52de35ae-ab63-4e1b-88d1-e42033ee56b7-kube-api-access-jnsb9\") pod \"designate-operator-controller-manager-7d695c9b56-jg4mn\" (UID: \"52de35ae-ab63-4e1b-88d1-e42033ee56b7\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.236437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnvqb\" (UniqueName: 
\"kubernetes.io/projected/ab197189-f8ba-4b06-b62a-73dd90994a39-kube-api-access-qnvqb\") pod \"cinder-operator-controller-manager-79856dc55c-nx9kk\" (UID: \"ab197189-f8ba-4b06-b62a-73dd90994a39\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.236456 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7fkb\" (UniqueName: \"kubernetes.io/projected/c6d746c7-cf41-4ebd-95ba-e23836f6e5d4-kube-api-access-k7fkb\") pod \"barbican-operator-controller-manager-86dc4d89c8-wtd7r\" (UID: \"c6d746c7-cf41-4ebd-95ba-e23836f6e5d4\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.247967 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-h8ff6" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.256621 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.258349 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.267177 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-78fcc" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.267342 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.270550 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.271674 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.277519 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lt8d6" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.283518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7fkb\" (UniqueName: \"kubernetes.io/projected/c6d746c7-cf41-4ebd-95ba-e23836f6e5d4-kube-api-access-k7fkb\") pod \"barbican-operator-controller-manager-86dc4d89c8-wtd7r\" (UID: \"c6d746c7-cf41-4ebd-95ba-e23836f6e5d4\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.287709 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.287957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvqb\" (UniqueName: \"kubernetes.io/projected/ab197189-f8ba-4b06-b62a-73dd90994a39-kube-api-access-qnvqb\") pod \"cinder-operator-controller-manager-79856dc55c-nx9kk\" (UID: \"ab197189-f8ba-4b06-b62a-73dd90994a39\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.300185 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.305414 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.306556 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.311830 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jmdf5" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.314139 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.315157 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.316829 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7cb82" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340092 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxqb\" (UniqueName: \"kubernetes.io/projected/afa155f0-dde8-4d99-a454-527207b3189c-kube-api-access-qxxqb\") pod \"heat-operator-controller-manager-774b86978c-xw2jj\" (UID: \"afa155f0-dde8-4d99-a454-527207b3189c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340147 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bpdq\" (UniqueName: \"kubernetes.io/projected/34b164fd-5d2f-4c00-83dc-ad8a90f4b94c-kube-api-access-9bpdq\") pod \"horizon-operator-controller-manager-68c9694994-k5fkx\" (UID: \"34b164fd-5d2f-4c00-83dc-ad8a90f4b94c\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2spts\" (UniqueName: \"kubernetes.io/projected/b44a0f95-c792-4375-9292-34a95608c64f-kube-api-access-2spts\") pod \"infra-operator-controller-manager-858778c9dc-2wljz\" (UID: \"b44a0f95-c792-4375-9292-34a95608c64f\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnsb9\" (UniqueName: \"kubernetes.io/projected/52de35ae-ab63-4e1b-88d1-e42033ee56b7-kube-api-access-jnsb9\") pod \"designate-operator-controller-manager-7d695c9b56-jg4mn\" (UID: \"52de35ae-ab63-4e1b-88d1-e42033ee56b7\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b44a0f95-c792-4375-9292-34a95608c64f-cert\") pod \"infra-operator-controller-manager-858778c9dc-2wljz\" (UID: \"b44a0f95-c792-4375-9292-34a95608c64f\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340280 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8fgk\" (UniqueName: \"kubernetes.io/projected/28171867-a10a-4f0c-840d-ce55038bcd93-kube-api-access-t8fgk\") pod \"glance-operator-controller-manager-69fbff6fff-t2zl8\" (UID: \"28171867-a10a-4f0c-840d-ce55038bcd93\") " pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.340308 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7zd\" (UniqueName: \"kubernetes.io/projected/ab3b5e40-6284-45cb-822e-a9490b1794c5-kube-api-access-nv7zd\") pod \"ironic-operator-controller-manager-5bfcdc958c-m6skf\" (UID: \"ab3b5e40-6284-45cb-822e-a9490b1794c5\") " 
pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.347529 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.368424 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.370211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxqb\" (UniqueName: \"kubernetes.io/projected/afa155f0-dde8-4d99-a454-527207b3189c-kube-api-access-qxxqb\") pod \"heat-operator-controller-manager-774b86978c-xw2jj\" (UID: \"afa155f0-dde8-4d99-a454-527207b3189c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.371925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8fgk\" (UniqueName: \"kubernetes.io/projected/28171867-a10a-4f0c-840d-ce55038bcd93-kube-api-access-t8fgk\") pod \"glance-operator-controller-manager-69fbff6fff-t2zl8\" (UID: \"28171867-a10a-4f0c-840d-ce55038bcd93\") " pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.389625 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.390938 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.392773 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.393422 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnsb9\" (UniqueName: \"kubernetes.io/projected/52de35ae-ab63-4e1b-88d1-e42033ee56b7-kube-api-access-jnsb9\") pod \"designate-operator-controller-manager-7d695c9b56-jg4mn\" (UID: \"52de35ae-ab63-4e1b-88d1-e42033ee56b7\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.396369 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rvb6t" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.408551 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.409585 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.412953 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jjvdb" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.418874 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.429249 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444347 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bpdq\" (UniqueName: \"kubernetes.io/projected/34b164fd-5d2f-4c00-83dc-ad8a90f4b94c-kube-api-access-9bpdq\") pod \"horizon-operator-controller-manager-68c9694994-k5fkx\" (UID: \"34b164fd-5d2f-4c00-83dc-ad8a90f4b94c\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2spts\" (UniqueName: \"kubernetes.io/projected/b44a0f95-c792-4375-9292-34a95608c64f-kube-api-access-2spts\") pod \"infra-operator-controller-manager-858778c9dc-2wljz\" (UID: \"b44a0f95-c792-4375-9292-34a95608c64f\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444558 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nqm\" (UniqueName: \"kubernetes.io/projected/2c04229f-5a27-4477-816d-60d5f1977144-kube-api-access-g7nqm\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-846gl\" (UID: \"2c04229f-5a27-4477-816d-60d5f1977144\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7lq\" (UniqueName: \"kubernetes.io/projected/8d92c413-b62d-4896-ae13-1ee9608aa65a-kube-api-access-jj7lq\") pod \"manila-operator-controller-manager-58bb8d67cc-b6vk2\" (UID: \"8d92c413-b62d-4896-ae13-1ee9608aa65a\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444648 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b44a0f95-c792-4375-9292-34a95608c64f-cert\") pod \"infra-operator-controller-manager-858778c9dc-2wljz\" (UID: \"b44a0f95-c792-4375-9292-34a95608c64f\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444776 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmsl\" (UniqueName: \"kubernetes.io/projected/8d6fc3b4-896a-4480-9371-930a2882151e-kube-api-access-jhmsl\") pod \"keystone-operator-controller-manager-748dc6576f-5sprh\" (UID: \"8d6fc3b4-896a-4480-9371-930a2882151e\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.444806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7zd\" (UniqueName: \"kubernetes.io/projected/ab3b5e40-6284-45cb-822e-a9490b1794c5-kube-api-access-nv7zd\") pod \"ironic-operator-controller-manager-5bfcdc958c-m6skf\" (UID: \"ab3b5e40-6284-45cb-822e-a9490b1794c5\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:03:19 crc 
kubenswrapper[4768]: I1124 18:03:19.454553 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.456981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b44a0f95-c792-4375-9292-34a95608c64f-cert\") pod \"infra-operator-controller-manager-858778c9dc-2wljz\" (UID: \"b44a0f95-c792-4375-9292-34a95608c64f\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.463753 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.464878 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.467440 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.472302 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ccr9t" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.479933 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.481021 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.496703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8s2tl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.498293 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv7zd\" (UniqueName: \"kubernetes.io/projected/ab3b5e40-6284-45cb-822e-a9490b1794c5-kube-api-access-nv7zd\") pod \"ironic-operator-controller-manager-5bfcdc958c-m6skf\" (UID: \"ab3b5e40-6284-45cb-822e-a9490b1794c5\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.507562 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.517501 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2spts\" (UniqueName: \"kubernetes.io/projected/b44a0f95-c792-4375-9292-34a95608c64f-kube-api-access-2spts\") pod \"infra-operator-controller-manager-858778c9dc-2wljz\" (UID: \"b44a0f95-c792-4375-9292-34a95608c64f\") " pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.517756 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bpdq\" (UniqueName: \"kubernetes.io/projected/34b164fd-5d2f-4c00-83dc-ad8a90f4b94c-kube-api-access-9bpdq\") pod \"horizon-operator-controller-manager-68c9694994-k5fkx\" (UID: \"34b164fd-5d2f-4c00-83dc-ad8a90f4b94c\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.521281 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.548650 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl2jq\" (UniqueName: \"kubernetes.io/projected/583db3d6-5f9c-4ce1-8214-06963fe50f96-kube-api-access-bl2jq\") pod \"nova-operator-controller-manager-79556f57fc-4mqdl\" (UID: \"583db3d6-5f9c-4ce1-8214-06963fe50f96\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.548756 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhmsl\" (UniqueName: \"kubernetes.io/projected/8d6fc3b4-896a-4480-9371-930a2882151e-kube-api-access-jhmsl\") pod \"keystone-operator-controller-manager-748dc6576f-5sprh\" (UID: \"8d6fc3b4-896a-4480-9371-930a2882151e\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.548858 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4h9\" (UniqueName: \"kubernetes.io/projected/7a599ec7-7361-4e08-8d81-3cfc208d41b5-kube-api-access-mv4h9\") pod \"neutron-operator-controller-manager-7c57c8bbc4-hdfsr\" (UID: \"7a599ec7-7361-4e08-8d81-3cfc208d41b5\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.548898 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nqm\" (UniqueName: \"kubernetes.io/projected/2c04229f-5a27-4477-816d-60d5f1977144-kube-api-access-g7nqm\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-846gl\" (UID: \"2c04229f-5a27-4477-816d-60d5f1977144\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.548933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7lq\" (UniqueName: \"kubernetes.io/projected/8d92c413-b62d-4896-ae13-1ee9608aa65a-kube-api-access-jj7lq\") pod \"manila-operator-controller-manager-58bb8d67cc-b6vk2\" (UID: \"8d92c413-b62d-4896-ae13-1ee9608aa65a\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" 
Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.548981 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8glc\" (UniqueName: \"kubernetes.io/projected/29ac0137-f29a-4a1f-8435-f4ec688a5948-kube-api-access-l8glc\") pod \"octavia-operator-controller-manager-fd75fd47d-f95nv\" (UID: \"29ac0137-f29a-4a1f-8435-f4ec688a5948\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.568252 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.573243 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nqm\" (UniqueName: \"kubernetes.io/projected/2c04229f-5a27-4477-816d-60d5f1977144-kube-api-access-g7nqm\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-846gl\" (UID: \"2c04229f-5a27-4477-816d-60d5f1977144\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.575841 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.579354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7lq\" (UniqueName: \"kubernetes.io/projected/8d92c413-b62d-4896-ae13-1ee9608aa65a-kube-api-access-jj7lq\") pod \"manila-operator-controller-manager-58bb8d67cc-b6vk2\" (UID: \"8d92c413-b62d-4896-ae13-1ee9608aa65a\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.579900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhmsl\" (UniqueName: \"kubernetes.io/projected/8d6fc3b4-896a-4480-9371-930a2882151e-kube-api-access-jhmsl\") pod \"keystone-operator-controller-manager-748dc6576f-5sprh\" (UID: \"8d6fc3b4-896a-4480-9371-930a2882151e\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.585584 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.608982 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.642010 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.650746 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4h9\" (UniqueName: \"kubernetes.io/projected/7a599ec7-7361-4e08-8d81-3cfc208d41b5-kube-api-access-mv4h9\") pod \"neutron-operator-controller-manager-7c57c8bbc4-hdfsr\" (UID: \"7a599ec7-7361-4e08-8d81-3cfc208d41b5\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.650815 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8glc\" (UniqueName: \"kubernetes.io/projected/29ac0137-f29a-4a1f-8435-f4ec688a5948-kube-api-access-l8glc\") pod \"octavia-operator-controller-manager-fd75fd47d-f95nv\" (UID: \"29ac0137-f29a-4a1f-8435-f4ec688a5948\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.650847 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl2jq\" (UniqueName: \"kubernetes.io/projected/583db3d6-5f9c-4ce1-8214-06963fe50f96-kube-api-access-bl2jq\") pod \"nova-operator-controller-manager-79556f57fc-4mqdl\" (UID: \"583db3d6-5f9c-4ce1-8214-06963fe50f96\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.651870 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.656845 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.657954 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.663661 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-b24mf" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.666716 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.667913 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.676113 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.695984 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xshn5" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.696334 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.705291 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.706108 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.709861 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9d47b" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.710956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4h9\" (UniqueName: \"kubernetes.io/projected/7a599ec7-7361-4e08-8d81-3cfc208d41b5-kube-api-access-mv4h9\") pod \"neutron-operator-controller-manager-7c57c8bbc4-hdfsr\" (UID: \"7a599ec7-7361-4e08-8d81-3cfc208d41b5\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.714572 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl2jq\" (UniqueName: \"kubernetes.io/projected/583db3d6-5f9c-4ce1-8214-06963fe50f96-kube-api-access-bl2jq\") pod \"nova-operator-controller-manager-79556f57fc-4mqdl\" (UID: \"583db3d6-5f9c-4ce1-8214-06963fe50f96\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.720020 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.728735 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.729519 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.731980 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8glc\" (UniqueName: \"kubernetes.io/projected/29ac0137-f29a-4a1f-8435-f4ec688a5948-kube-api-access-l8glc\") pod \"octavia-operator-controller-manager-fd75fd47d-f95nv\" (UID: \"29ac0137-f29a-4a1f-8435-f4ec688a5948\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.732229 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mqlfq" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.742076 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.752181 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.756378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4djs\" (UniqueName: \"kubernetes.io/projected/d54c925d-91d6-4bb8-acff-623c4f213352-kube-api-access-s4djs\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.756427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.756514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkn8k\" (UniqueName: \"kubernetes.io/projected/78e75462-3120-4d07-a571-56727914e173-kube-api-access-pkn8k\") pod \"placement-operator-controller-manager-5db546f9d9-2t64b\" (UID: \"78e75462-3120-4d07-a571-56727914e173\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.756835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt6b\" (UniqueName: \"kubernetes.io/projected/8fe91de1-efe8-43e5-8b29-89043d06e880-kube-api-access-5xt6b\") pod \"swift-operator-controller-manager-6fdc4fcf86-4dwgz\" (UID: \"8fe91de1-efe8-43e5-8b29-89043d06e880\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.756932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdtn\" (UniqueName: \"kubernetes.io/projected/0f74f3df-ed63-4105-882e-c3122177da3a-kube-api-access-7zdtn\") pod \"ovn-operator-controller-manager-66cf5c67ff-fz64p\" (UID: \"0f74f3df-ed63-4105-882e-c3122177da3a\") " 
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.771466 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.772735 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.784396 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gvm6f" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.799357 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.812541 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.821786 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.830845 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.838523 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.856595 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.857715 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.858717 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmwt7\" (UniqueName: \"kubernetes.io/projected/4d4b069e-80e6-409b-aeee-130ac4351f32-kube-api-access-mmwt7\") pod \"telemetry-operator-controller-manager-567f98c9d-lfbgz\" (UID: \"4d4b069e-80e6-409b-aeee-130ac4351f32\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.858754 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xt6b\" (UniqueName: \"kubernetes.io/projected/8fe91de1-efe8-43e5-8b29-89043d06e880-kube-api-access-5xt6b\") pod \"swift-operator-controller-manager-6fdc4fcf86-4dwgz\" (UID: \"8fe91de1-efe8-43e5-8b29-89043d06e880\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.858785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdtn\" (UniqueName: \"kubernetes.io/projected/0f74f3df-ed63-4105-882e-c3122177da3a-kube-api-access-7zdtn\") pod \"ovn-operator-controller-manager-66cf5c67ff-fz64p\" (UID: \"0f74f3df-ed63-4105-882e-c3122177da3a\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.858810 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4djs\" (UniqueName: \"kubernetes.io/projected/d54c925d-91d6-4bb8-acff-623c4f213352-kube-api-access-s4djs\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.858828 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.858862 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkn8k\" (UniqueName: \"kubernetes.io/projected/78e75462-3120-4d07-a571-56727914e173-kube-api-access-pkn8k\") pod \"placement-operator-controller-manager-5db546f9d9-2t64b\" (UID: \"78e75462-3120-4d07-a571-56727914e173\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:03:19 crc kubenswrapper[4768]: E1124 18:03:19.859457 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 18:03:19 crc kubenswrapper[4768]: E1124 18:03:19.859506 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert podName:d54c925d-91d6-4bb8-acff-623c4f213352 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:20.359478324 +0000 UTC m=+839.220060101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-lv927" (UID: "d54c925d-91d6-4bb8-acff-623c4f213352") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.868406 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.873975 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.874654 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nv9sr" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.891628 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-2264q"] Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.891879 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.893861 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.898476 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qlcn2" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.962201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbxn6\" (UniqueName: \"kubernetes.io/projected/1f0a9442-916e-442d-bb0f-6060ba5915c8-kube-api-access-zbxn6\") pod \"test-operator-controller-manager-5cb74df96-d2hdv\" (UID: \"1f0a9442-916e-442d-bb0f-6060ba5915c8\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.962276 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmwt7\" (UniqueName: \"kubernetes.io/projected/4d4b069e-80e6-409b-aeee-130ac4351f32-kube-api-access-mmwt7\") pod \"telemetry-operator-controller-manager-567f98c9d-lfbgz\" (UID: \"4d4b069e-80e6-409b-aeee-130ac4351f32\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.962348 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrqb\" (UniqueName: \"kubernetes.io/projected/c6d6eee2-6cb1-411d-837f-921b1c6c92fb-kube-api-access-crrqb\") pod \"watcher-operator-controller-manager-864885998-2264q\" (UID: \"c6d6eee2-6cb1-411d-837f-921b1c6c92fb\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:03:19 crc kubenswrapper[4768]: I1124 18:03:19.982135 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmwt7\" (UniqueName: \"kubernetes.io/projected/4d4b069e-80e6-409b-aeee-130ac4351f32-kube-api-access-mmwt7\") pod \"telemetry-operator-controller-manager-567f98c9d-lfbgz\" (UID: \"4d4b069e-80e6-409b-aeee-130ac4351f32\") " 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.001926 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdtn\" (UniqueName: \"kubernetes.io/projected/0f74f3df-ed63-4105-882e-c3122177da3a-kube-api-access-7zdtn\") pod \"ovn-operator-controller-manager-66cf5c67ff-fz64p\" (UID: \"0f74f3df-ed63-4105-882e-c3122177da3a\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.013885 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-2264q"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.013925 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.014745 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.014824 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.017698 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.017948 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cm2l4" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.018966 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.029759 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.030763 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.038414 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zrk76" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.049150 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.067603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxlw\" (UniqueName: \"kubernetes.io/projected/dfa124f2-a194-4cae-bfed-eb56288e56a6-kube-api-access-cjxlw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-csz8k\" (UID: \"dfa124f2-a194-4cae-bfed-eb56288e56a6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.067682 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbxn6\" (UniqueName: \"kubernetes.io/projected/1f0a9442-916e-442d-bb0f-6060ba5915c8-kube-api-access-zbxn6\") pod \"test-operator-controller-manager-5cb74df96-d2hdv\" (UID: \"1f0a9442-916e-442d-bb0f-6060ba5915c8\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.067801 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5fp6\" (UniqueName: \"kubernetes.io/projected/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-kube-api-access-q5fp6\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.067952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crrqb\" (UniqueName: \"kubernetes.io/projected/c6d6eee2-6cb1-411d-837f-921b1c6c92fb-kube-api-access-crrqb\") pod \"watcher-operator-controller-manager-864885998-2264q\" (UID: \"c6d6eee2-6cb1-411d-837f-921b1c6c92fb\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.068065 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.068106 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.087012 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.089564 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4djs\" (UniqueName: \"kubernetes.io/projected/d54c925d-91d6-4bb8-acff-623c4f213352-kube-api-access-s4djs\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.091570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xt6b\" (UniqueName: \"kubernetes.io/projected/8fe91de1-efe8-43e5-8b29-89043d06e880-kube-api-access-5xt6b\") pod \"swift-operator-controller-manager-6fdc4fcf86-4dwgz\" (UID: \"8fe91de1-efe8-43e5-8b29-89043d06e880\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.100632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crrqb\" (UniqueName: \"kubernetes.io/projected/c6d6eee2-6cb1-411d-837f-921b1c6c92fb-kube-api-access-crrqb\") pod \"watcher-operator-controller-manager-864885998-2264q\" (UID: \"c6d6eee2-6cb1-411d-837f-921b1c6c92fb\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.112523 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbxn6\" (UniqueName: \"kubernetes.io/projected/1f0a9442-916e-442d-bb0f-6060ba5915c8-kube-api-access-zbxn6\") pod \"test-operator-controller-manager-5cb74df96-d2hdv\" (UID: \"1f0a9442-916e-442d-bb0f-6060ba5915c8\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.121313 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkn8k\" (UniqueName: \"kubernetes.io/projected/78e75462-3120-4d07-a571-56727914e173-kube-api-access-pkn8k\") pod \"placement-operator-controller-manager-5db546f9d9-2t64b\" (UID: \"78e75462-3120-4d07-a571-56727914e173\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.127568 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.181548 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.181586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.181644 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxlw\" (UniqueName: \"kubernetes.io/projected/dfa124f2-a194-4cae-bfed-eb56288e56a6-kube-api-access-cjxlw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-csz8k\" (UID: \"dfa124f2-a194-4cae-bfed-eb56288e56a6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.181696 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5fp6\" (UniqueName: \"kubernetes.io/projected/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-kube-api-access-q5fp6\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.182069 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.182111 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs podName:ba241c62-4e0e-4e9b-bff9-4f590d0a1d28 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:20.6820966 +0000 UTC m=+839.542678377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs") pod "openstack-operator-controller-manager-bdb766b46-6b4tf" (UID: "ba241c62-4e0e-4e9b-bff9-4f590d0a1d28") : secret "metrics-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.182271 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.182302 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs podName:ba241c62-4e0e-4e9b-bff9-4f590d0a1d28 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:20.682295835 +0000 UTC m=+839.542877612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs") pod "openstack-operator-controller-manager-bdb766b46-6b4tf" (UID: "ba241c62-4e0e-4e9b-bff9-4f590d0a1d28") : secret "webhook-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.208189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5fp6\" (UniqueName: \"kubernetes.io/projected/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-kube-api-access-q5fp6\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.221143 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzlt7"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.240471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxlw\" (UniqueName: \"kubernetes.io/projected/dfa124f2-a194-4cae-bfed-eb56288e56a6-kube-api-access-cjxlw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-csz8k\" (UID: \"dfa124f2-a194-4cae-bfed-eb56288e56a6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.261379 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.306088 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.338923 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.387954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.388369 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.388572 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert podName:d54c925d-91d6-4bb8-acff-623c4f213352 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:21.388454766 +0000 UTC m=+840.249036713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-lv927" (UID: "d54c925d-91d6-4bb8-acff-623c4f213352") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.392764 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.413926 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.423213 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.453510 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.533193 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" event={"ID":"52de35ae-ab63-4e1b-88d1-e42033ee56b7","Type":"ContainerStarted","Data":"dd24322a12f4df86f16a9901b88f4dfa30a56743697fbf6401428d8b48437e62"} Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.579185 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.593412 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r"] Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.703381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: I1124 18:03:20.703507 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.703563 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.703634 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.703663 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs podName:ba241c62-4e0e-4e9b-bff9-4f590d0a1d28 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:21.703626966 +0000 UTC m=+840.564208743 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs") pod "openstack-operator-controller-manager-bdb766b46-6b4tf" (UID: "ba241c62-4e0e-4e9b-bff9-4f590d0a1d28") : secret "metrics-server-cert" not found Nov 24 18:03:20 crc kubenswrapper[4768]: E1124 18:03:20.703835 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs podName:ba241c62-4e0e-4e9b-bff9-4f590d0a1d28 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:21.703819581 +0000 UTC m=+840.564401368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs") pod "openstack-operator-controller-manager-bdb766b46-6b4tf" (UID: "ba241c62-4e0e-4e9b-bff9-4f590d0a1d28") : secret "webhook-server-cert" not found Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.033698 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.070369 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.086321 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.086389 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf"] Nov 24 18:03:21 crc kubenswrapper[4768]: W1124 18:03:21.089777 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3b5e40_6284_45cb_822e_a9490b1794c5.slice/crio-78c1d9ae13dd50fc0058293e2997ce82571af418178e5caeb0263a334b39e425 WatchSource:0}: Error finding container 78c1d9ae13dd50fc0058293e2997ce82571af418178e5caeb0263a334b39e425: Status 404 returned error can't find the container with id 78c1d9ae13dd50fc0058293e2997ce82571af418178e5caeb0263a334b39e425 Nov 24 18:03:21 crc kubenswrapper[4768]: W1124 18:03:21.104849 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d6fc3b4_896a_4480_9371_930a2882151e.slice/crio-c5c138c2188e222d278f2e410c9a40e608e14d994594556f4f437a8e17a8f7f3 WatchSource:0}: Error finding container c5c138c2188e222d278f2e410c9a40e608e14d994594556f4f437a8e17a8f7f3: Status 404 returned error can't find the container with id c5c138c2188e222d278f2e410c9a40e608e14d994594556f4f437a8e17a8f7f3 Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.123635 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.324380 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.337671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.356872 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl"] Nov 24 18:03:21 crc kubenswrapper[4768]: W1124 18:03:21.400826 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c04229f_5a27_4477_816d_60d5f1977144.slice/crio-fcb243d9d007401513a8a78a9d2707c22c600bba111e65644da0adbb40a49aec WatchSource:0}: Error finding container fcb243d9d007401513a8a78a9d2707c22c600bba111e65644da0adbb40a49aec: Status 404 returned error can't find the container with id fcb243d9d007401513a8a78a9d2707c22c600bba111e65644da0adbb40a49aec Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.400904 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.400977 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.408685 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.424142 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.426935 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.436903 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d54c925d-91d6-4bb8-acff-623c4f213352-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-lv927\" (UID: \"d54c925d-91d6-4bb8-acff-623c4f213352\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:21 crc kubenswrapper[4768]: W1124 18:03:21.440598 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe91de1_efe8_43e5_8b29_89043d06e880.slice/crio-3dd47369aad01e7ab11ed3b17ec3037de1da075d39a578cd7f676348e0cb3f3b WatchSource:0}: Error finding container 3dd47369aad01e7ab11ed3b17ec3037de1da075d39a578cd7f676348e0cb3f3b: Status 404 returned error can't find the container with id 3dd47369aad01e7ab11ed3b17ec3037de1da075d39a578cd7f676348e0cb3f3b Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.444127 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bl2jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-4mqdl_openstack-operators(583db3d6-5f9c-4ce1-8214-06963fe50f96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.446144 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crrqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-2264q_openstack-operators(c6d6eee2-6cb1-411d-837f-921b1c6c92fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.447545 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bl2jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-4mqdl_openstack-operators(583db3d6-5f9c-4ce1-8214-06963fe50f96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.449519 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" podUID="583db3d6-5f9c-4ce1-8214-06963fe50f96" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.453589 4768 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crrqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-2264q_openstack-operators(c6d6eee2-6cb1-411d-837f-921b1c6c92fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.455304 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" podUID="c6d6eee2-6cb1-411d-837f-921b1c6c92fb" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.455360 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.464010 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-2264q"] Nov 24 18:03:21 crc kubenswrapper[4768]: W1124 18:03:21.467563 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78e75462_3120_4d07_a571_56727914e173.slice/crio-933ded76c432feb1d1cb5c7c622645de441f677a17eb3efe9b334917724ebc5c WatchSource:0}: Error finding container 933ded76c432feb1d1cb5c7c622645de441f677a17eb3efe9b334917724ebc5c: Status 404 returned error can't find the container with id 933ded76c432feb1d1cb5c7c622645de441f677a17eb3efe9b334917724ebc5c Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.473467 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz"] Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.475325 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmwt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-lfbgz_openstack-operators(4d4b069e-80e6-409b-aeee-130ac4351f32): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.478997 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkn8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-2t64b_openstack-operators(78e75462-3120-4d07-a571-56727914e173): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.493603 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkn8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-2t64b_openstack-operators(78e75462-3120-4d07-a571-56727914e173): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.493697 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmwt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-lfbgz_openstack-operators(4d4b069e-80e6-409b-aeee-130ac4351f32): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.494791 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" podUID="78e75462-3120-4d07-a571-56727914e173" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.494836 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" podUID="4d4b069e-80e6-409b-aeee-130ac4351f32" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.495654 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p"] Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.496925 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zdtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-fz64p_openstack-operators(0f74f3df-ed63-4105-882e-c3122177da3a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.506187 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zdtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-fz64p_openstack-operators(0f74f3df-ed63-4105-882e-c3122177da3a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.508095 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" podUID="0f74f3df-ed63-4105-882e-c3122177da3a" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.512015 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k"] Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.527034 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:03:21 crc kubenswrapper[4768]: W1124 18:03:21.552080 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfa124f2_a194_4cae_bfed_eb56288e56a6.slice/crio-b25e20aa1f787b8849f4bf2edd92af868a8653709cde0eb0d0f7026bbaacdb68 WatchSource:0}: Error finding container b25e20aa1f787b8849f4bf2edd92af868a8653709cde0eb0d0f7026bbaacdb68: Status 404 returned error can't find the container with id b25e20aa1f787b8849f4bf2edd92af868a8653709cde0eb0d0f7026bbaacdb68 Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.555585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" event={"ID":"afa155f0-dde8-4d99-a454-527207b3189c","Type":"ContainerStarted","Data":"3fb6b99e0ca819d0244509c8c8dc629c2f3a4c919d37bdbc9188802d680dd114"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.557122 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" event={"ID":"1f0a9442-916e-442d-bb0f-6060ba5915c8","Type":"ContainerStarted","Data":"b831ab407e0e04420806ab9ce57e4f616d2ec03b59036f2ca9776ec8532b68da"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.558868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" event={"ID":"29ac0137-f29a-4a1f-8435-f4ec688a5948","Type":"ContainerStarted","Data":"71669b7498df67810e081bcb1824b6d4906ad0c9f68f24b6a55622a826bbf755"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.559803 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" event={"ID":"7a599ec7-7361-4e08-8d81-3cfc208d41b5","Type":"ContainerStarted","Data":"886d0c676c2eaebf55de41f85ca2ddf04ea9db11d1e71543ded5ee02544b3887"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.562331 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjxlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-csz8k_openstack-operators(dfa124f2-a194-4cae-bfed-eb56288e56a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.562528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" event={"ID":"28171867-a10a-4f0c-840d-ce55038bcd93","Type":"ContainerStarted","Data":"3b6cc0833ccf8a77912ef2371aa4881aadef3cfd4df9bc48ddaf479d62beac0e"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.563447 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" podUID="dfa124f2-a194-4cae-bfed-eb56288e56a6" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.564837 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" event={"ID":"ab197189-f8ba-4b06-b62a-73dd90994a39","Type":"ContainerStarted","Data":"0be10577d76e288758e0d1249c0b0ad50f9f39b064cd484e537cc328d53c87b3"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.570089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" event={"ID":"2c04229f-5a27-4477-816d-60d5f1977144","Type":"ContainerStarted","Data":"fcb243d9d007401513a8a78a9d2707c22c600bba111e65644da0adbb40a49aec"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.578345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" event={"ID":"8d92c413-b62d-4896-ae13-1ee9608aa65a","Type":"ContainerStarted","Data":"f0782edba9f069fa3996287ab9c1221a5317586132a1e9be0e402da366158adc"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.580712 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" event={"ID":"4d4b069e-80e6-409b-aeee-130ac4351f32","Type":"ContainerStarted","Data":"dfebce7aa94eec642fb691a0f8868a8ddd8931ba4c5a5507760e1d27b952584d"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.597640 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" podUID="4d4b069e-80e6-409b-aeee-130ac4351f32" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.607587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" event={"ID":"ab3b5e40-6284-45cb-822e-a9490b1794c5","Type":"ContainerStarted","Data":"78c1d9ae13dd50fc0058293e2997ce82571af418178e5caeb0263a334b39e425"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.610086 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" event={"ID":"8d6fc3b4-896a-4480-9371-930a2882151e","Type":"ContainerStarted","Data":"c5c138c2188e222d278f2e410c9a40e608e14d994594556f4f437a8e17a8f7f3"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.613605 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" event={"ID":"c6d746c7-cf41-4ebd-95ba-e23836f6e5d4","Type":"ContainerStarted","Data":"ab3a2a988fd41fad7c12fa5697efe5743688276514d09422499e0d4cad72528b"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.614979 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" event={"ID":"b44a0f95-c792-4375-9292-34a95608c64f","Type":"ContainerStarted","Data":"0eac15571bbe4b341904a756ae959966b5841b4f4459c3ebeed8621cb8dd79b1"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.617505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" event={"ID":"583db3d6-5f9c-4ce1-8214-06963fe50f96","Type":"ContainerStarted","Data":"465802aa00a40389f91b8aa8946af0658eede8772ac9f93f9d13a648fe31c62c"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.622795 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" podUID="583db3d6-5f9c-4ce1-8214-06963fe50f96" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.623758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" event={"ID":"78e75462-3120-4d07-a571-56727914e173","Type":"ContainerStarted","Data":"933ded76c432feb1d1cb5c7c622645de441f677a17eb3efe9b334917724ebc5c"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.627055 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" podUID="78e75462-3120-4d07-a571-56727914e173" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.628646 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" event={"ID":"c6d6eee2-6cb1-411d-837f-921b1c6c92fb","Type":"ContainerStarted","Data":"ce7f200ebaf05173b6d203071e6a9cf4ea327e9532c5edde6f53374678c4fa9b"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.631015 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" podUID="c6d6eee2-6cb1-411d-837f-921b1c6c92fb" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.633601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" event={"ID":"0f74f3df-ed63-4105-882e-c3122177da3a","Type":"ContainerStarted","Data":"28a4971d82d537d0157ba05a2ac150940506eef99cfc355d5e63b1bd68f1575c"} Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.646137 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" podUID="0f74f3df-ed63-4105-882e-c3122177da3a" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.650198 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" event={"ID":"34b164fd-5d2f-4c00-83dc-ad8a90f4b94c","Type":"ContainerStarted","Data":"270af79857d0cbf16e2f8dd5c019a30dadc41e497843e3c732dbdf3745f25e55"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.655716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" event={"ID":"8fe91de1-efe8-43e5-8b29-89043d06e880","Type":"ContainerStarted","Data":"3dd47369aad01e7ab11ed3b17ec3037de1da075d39a578cd7f676348e0cb3f3b"} Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.655753 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dzlt7" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="registry-server" containerID="cri-o://c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" gracePeriod=2 Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.742104 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " 
pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:21 crc kubenswrapper[4768]: I1124 18:03:21.742375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.742499 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.742547 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs podName:ba241c62-4e0e-4e9b-bff9-4f590d0a1d28 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:23.742533764 +0000 UTC m=+842.603115541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs") pod "openstack-operator-controller-manager-bdb766b46-6b4tf" (UID: "ba241c62-4e0e-4e9b-bff9-4f590d0a1d28") : secret "webhook-server-cert" not found Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.742799 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 18:03:21 crc kubenswrapper[4768]: E1124 18:03:21.742826 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs podName:ba241c62-4e0e-4e9b-bff9-4f590d0a1d28 nodeName:}" failed. No retries permitted until 2025-11-24 18:03:23.742819252 +0000 UTC m=+842.603401029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs") pod "openstack-operator-controller-manager-bdb766b46-6b4tf" (UID: "ba241c62-4e0e-4e9b-bff9-4f590d0a1d28") : secret "metrics-server-cert" not found Nov 24 18:03:22 crc kubenswrapper[4768]: I1124 18:03:22.125146 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927"] Nov 24 18:03:22 crc kubenswrapper[4768]: I1124 18:03:22.587331 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:22 crc kubenswrapper[4768]: I1124 18:03:22.587702 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:22 crc kubenswrapper[4768]: I1124 18:03:22.650359 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:22 crc kubenswrapper[4768]: I1124 18:03:22.667364 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" event={"ID":"dfa124f2-a194-4cae-bfed-eb56288e56a6","Type":"ContainerStarted","Data":"b25e20aa1f787b8849f4bf2edd92af868a8653709cde0eb0d0f7026bbaacdb68"} Nov 24 18:03:22 crc kubenswrapper[4768]: I1124 18:03:22.670821 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" event={"ID":"d54c925d-91d6-4bb8-acff-623c4f213352","Type":"ContainerStarted","Data":"2f6a1e57f65a4c83f346ec1362d63ddb7a98db4bf2a38f0044d38710536b7fc3"} Nov 24 18:03:22 crc kubenswrapper[4768]: E1124 18:03:22.680021 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" podUID="dfa124f2-a194-4cae-bfed-eb56288e56a6" Nov 24 18:03:22 crc kubenswrapper[4768]: E1124 18:03:22.689378 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" podUID="0f74f3df-ed63-4105-882e-c3122177da3a" Nov 24 18:03:22 crc kubenswrapper[4768]: E1124 18:03:22.689892 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" podUID="c6d6eee2-6cb1-411d-837f-921b1c6c92fb" Nov 24 18:03:22 
crc kubenswrapper[4768]: E1124 18:03:22.690037 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" podUID="4d4b069e-80e6-409b-aeee-130ac4351f32" Nov 24 18:03:22 crc kubenswrapper[4768]: E1124 18:03:22.714068 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" podUID="78e75462-3120-4d07-a571-56727914e173" Nov 24 18:03:22 crc kubenswrapper[4768]: E1124 18:03:22.714180 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" podUID="583db3d6-5f9c-4ce1-8214-06963fe50f96" Nov 24 18:03:23 crc kubenswrapper[4768]: E1124 18:03:23.678369 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" podUID="dfa124f2-a194-4cae-bfed-eb56288e56a6" Nov 24 18:03:23 crc kubenswrapper[4768]: I1124 18:03:23.791313 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:23 crc kubenswrapper[4768]: I1124 18:03:23.791373 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:23 crc kubenswrapper[4768]: I1124 18:03:23.800648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-webhook-certs\") pod 
\"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:23 crc kubenswrapper[4768]: I1124 18:03:23.801971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba241c62-4e0e-4e9b-bff9-4f590d0a1d28-metrics-certs\") pod \"openstack-operator-controller-manager-bdb766b46-6b4tf\" (UID: \"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28\") " pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:23 crc kubenswrapper[4768]: I1124 18:03:23.881641 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cm2l4" Nov 24 18:03:23 crc kubenswrapper[4768]: I1124 18:03:23.888968 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:24 crc kubenswrapper[4768]: I1124 18:03:24.324725 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf"] Nov 24 18:03:24 crc kubenswrapper[4768]: I1124 18:03:24.693729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" event={"ID":"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28","Type":"ContainerStarted","Data":"d204e28fea17642027ba89502634b1d3058b85caa1f80a93858db82caadf6444"} Nov 24 18:03:28 crc kubenswrapper[4768]: E1124 18:03:28.806171 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:28 crc kubenswrapper[4768]: E1124 18:03:28.807765 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:28 crc kubenswrapper[4768]: E1124 18:03:28.808344 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:28 crc kubenswrapper[4768]: E1124 18:03:28.808374 4768 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dzlt7" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="registry-server" Nov 24 18:03:30 crc kubenswrapper[4768]: I1124 18:03:30.444306 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" exitCode=0 Nov 24 18:03:30 crc kubenswrapper[4768]: I1124 18:03:30.444345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerDied","Data":"c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3"} Nov 24 18:03:31 crc kubenswrapper[4768]: I1124 18:03:31.454233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" event={"ID":"ba241c62-4e0e-4e9b-bff9-4f590d0a1d28","Type":"ContainerStarted","Data":"34e19bbc79131a0d4ee27a2572a88ca6063046268ad2d328988c873e946476bc"} Nov 24 18:03:31 crc kubenswrapper[4768]: I1124 18:03:31.455032 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:31 crc kubenswrapper[4768]: I1124 18:03:31.489260 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" podStartSLOduration=12.489231695 podStartE2EDuration="12.489231695s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:03:31.478883735 +0000 UTC m=+850.339465512" watchObservedRunningTime="2025-11-24 18:03:31.489231695 +0000 UTC m=+850.349813472" Nov 24 18:03:32 crc kubenswrapper[4768]: I1124 18:03:32.644096 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:32 crc kubenswrapper[4768]: I1124 18:03:32.685746 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6jjlh"] Nov 24 18:03:33 crc kubenswrapper[4768]: I1124 18:03:33.469533 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6jjlh" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="registry-server" containerID="cri-o://b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a" gracePeriod=2 Nov 24 18:03:34 crc kubenswrapper[4768]: I1124 18:03:34.492045 4768 generic.go:334] "Generic (PLEG): container finished" podID="7447f851-9eef-48b9-849e-ac7a51793472" containerID="b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a" exitCode=0 Nov 24 18:03:34 crc kubenswrapper[4768]: I1124 18:03:34.492137 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerDied","Data":"b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a"} Nov 24 18:03:38 crc kubenswrapper[4768]: E1124 18:03:38.806422 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:38 crc kubenswrapper[4768]: E1124 18:03:38.807307 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:38 crc kubenswrapper[4768]: E1124 18:03:38.808113 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:38 crc kubenswrapper[4768]: E1124 18:03:38.808191 4768 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dzlt7" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="registry-server" Nov 24 18:03:39 crc kubenswrapper[4768]: E1124 18:03:39.666726 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.83:5001/openstack-k8s-operators/glance-operator:f20c979df47e00e045ad52f68407373204606afb" Nov 24 18:03:39 crc kubenswrapper[4768]: E1124 18:03:39.666781 4768 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.83:5001/openstack-k8s-operators/glance-operator:f20c979df47e00e045ad52f68407373204606afb" Nov 24 18:03:39 crc kubenswrapper[4768]: E1124 18:03:39.666910 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.83:5001/openstack-k8s-operators/glance-operator:f20c979df47e00e045ad52f68407373204606afb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8fgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-69fbff6fff-t2zl8_openstack-operators(28171867-a10a-4f0c-840d-ce55038bcd93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:39 crc kubenswrapper[4768]: E1124 18:03:39.912979 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 24 18:03:39 crc kubenswrapper[4768]: E1124 18:03:39.913318 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g7nqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-846gl_openstack-operators(2c04229f-5a27-4477-816d-60d5f1977144): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:40 crc kubenswrapper[4768]: E1124 18:03:40.904265 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 24 18:03:40 crc kubenswrapper[4768]: E1124 18:03:40.904764 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8glc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-f95nv_openstack-operators(29ac0137-f29a-4a1f-8435-f4ec688a5948): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:42 crc kubenswrapper[4768]: E1124 18:03:42.588042 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a is running failed: container process not found" containerID="b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:42 crc kubenswrapper[4768]: E1124 18:03:42.589368 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a is running failed: container process not found" containerID="b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:42 crc kubenswrapper[4768]: E1124 18:03:42.589789 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a is running failed: container process not found" containerID="b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:03:42 crc kubenswrapper[4768]: E1124 18:03:42.589901 4768 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6jjlh" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="registry-server" Nov 24 18:03:42 crc kubenswrapper[4768]: E1124 18:03:42.978734 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 24 18:03:42 crc kubenswrapper[4768]: E1124 18:03:42.979168 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bpdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-k5fkx_openstack-operators(34b164fd-5d2f-4c00-83dc-ad8a90f4b94c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:43 crc kubenswrapper[4768]: E1124 18:03:43.472231 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0" Nov 24 18:03:43 crc kubenswrapper[4768]: E1124 18:03:43.472428 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xt6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-4dwgz_openstack-operators(8fe91de1-efe8-43e5-8b29-89043d06e880): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:43 crc kubenswrapper[4768]: I1124 18:03:43.894859 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bdb766b46-6b4tf" Nov 24 18:03:43 crc kubenswrapper[4768]: E1124 18:03:43.898982 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7" Nov 24 18:03:43 crc kubenswrapper[4768]: E1124 18:03:43.899227 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k7fkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-86dc4d89c8-wtd7r_openstack-operators(c6d746c7-cf41-4ebd-95ba-e23836f6e5d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:44 crc kubenswrapper[4768]: E1124 18:03:44.420056 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 24 18:03:44 crc kubenswrapper[4768]: E1124 18:03:44.420281 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbxn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-d2hdv_openstack-operators(1f0a9442-916e-442d-bb0f-6060ba5915c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:44 crc kubenswrapper[4768]: E1124 18:03:44.859508 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 24 18:03:44 crc kubenswrapper[4768]: E1124 18:03:44.859710 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxxqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-xw2jj_openstack-operators(afa155f0-dde8-4d99-a454-527207b3189c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:45 crc kubenswrapper[4768]: E1124 18:03:45.300019 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 24 18:03:45 crc kubenswrapper[4768]: E1124 18:03:45.300383 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mv4h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-hdfsr_openstack-operators(7a599ec7-7361-4e08-8d81-3cfc208d41b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:45 crc kubenswrapper[4768]: E1124 18:03:45.793111 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9" Nov 24 18:03:45 crc kubenswrapper[4768]: E1124 18:03:45.793862 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnvqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-79856dc55c-nx9kk_openstack-operators(ab197189-f8ba-4b06-b62a-73dd90994a39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:45 crc kubenswrapper[4768]: I1124 18:03:45.846981 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:45 crc kubenswrapper[4768]: I1124 18:03:45.931351 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-catalog-content\") pod \"5e67aebe-8102-4767-9d4a-00c5e0317271\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " Nov 24 18:03:45 crc kubenswrapper[4768]: I1124 18:03:45.931447 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-utilities\") pod \"5e67aebe-8102-4767-9d4a-00c5e0317271\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " Nov 24 18:03:45 crc kubenswrapper[4768]: I1124 18:03:45.931607 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9nn\" (UniqueName: \"kubernetes.io/projected/5e67aebe-8102-4767-9d4a-00c5e0317271-kube-api-access-qs9nn\") pod \"5e67aebe-8102-4767-9d4a-00c5e0317271\" (UID: \"5e67aebe-8102-4767-9d4a-00c5e0317271\") " Nov 24 18:03:45 crc kubenswrapper[4768]: I1124 18:03:45.935275 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-utilities" (OuterVolumeSpecName: "utilities") pod "5e67aebe-8102-4767-9d4a-00c5e0317271" (UID: "5e67aebe-8102-4767-9d4a-00c5e0317271"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:03:45 crc kubenswrapper[4768]: I1124 18:03:45.938161 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e67aebe-8102-4767-9d4a-00c5e0317271-kube-api-access-qs9nn" (OuterVolumeSpecName: "kube-api-access-qs9nn") pod "5e67aebe-8102-4767-9d4a-00c5e0317271" (UID: "5e67aebe-8102-4767-9d4a-00c5e0317271"). InnerVolumeSpecName "kube-api-access-qs9nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.026519 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e67aebe-8102-4767-9d4a-00c5e0317271" (UID: "5e67aebe-8102-4767-9d4a-00c5e0317271"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.033734 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9nn\" (UniqueName: \"kubernetes.io/projected/5e67aebe-8102-4767-9d4a-00c5e0317271-kube-api-access-qs9nn\") on node \"crc\" DevicePath \"\"" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.034902 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.035104 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e67aebe-8102-4767-9d4a-00c5e0317271-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:03:46 crc kubenswrapper[4768]: E1124 18:03:46.247397 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 24 18:03:46 crc kubenswrapper[4768]: E1124 18:03:46.247646 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jj7lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-b6vk2_openstack-operators(8d92c413-b62d-4896-ae13-1ee9608aa65a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.570900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlt7" event={"ID":"5e67aebe-8102-4767-9d4a-00c5e0317271","Type":"ContainerDied","Data":"0f213a9f6ab918cc001a7b7d9302fc3abc49d4262d037157ed40f1bc4aa4e6eb"} Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.570953 4768 scope.go:117] "RemoveContainer" containerID="c7695ce01100f7728d63fe13b73ad16744e2f6ee5c9edaf5a587323277d0c1e3" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.570984 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlt7" Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.602456 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzlt7"] Nov 24 18:03:46 crc kubenswrapper[4768]: I1124 18:03:46.608865 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dzlt7"] Nov 24 18:03:46 crc kubenswrapper[4768]: E1124 18:03:46.679720 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:f0688f6a55b7b548aaafd5c2c4f0749a43e7ea447c62a24e8b35257c5d8ba17f" Nov 24 18:03:46 crc kubenswrapper[4768]: E1124 18:03:46.679981 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:f0688f6a55b7b548aaafd5c2c4f0749a43e7ea447c62a24e8b35257c5d8ba17f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2spts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-858778c9dc-2wljz_openstack-operators(b44a0f95-c792-4375-9292-34a95608c64f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:47 crc kubenswrapper[4768]: E1124 18:03:47.110084 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd" Nov 24 18:03:47 crc kubenswrapper[4768]: E1124 18:03:47.110738 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4djs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-544b9bb9-lv927_openstack-operators(d54c925d-91d6-4bb8-acff-623c4f213352): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:47 crc kubenswrapper[4768]: E1124 18:03:47.495635 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377" Nov 24 18:03:47 crc kubenswrapper[4768]: E1124 18:03:47.495823 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nv7zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bfcdc958c-m6skf_openstack-operators(ab3b5e40-6284-45cb-822e-a9490b1794c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:47 crc kubenswrapper[4768]: E1124 18:03:47.861725 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f" Nov 24 18:03:47 crc kubenswrapper[4768]: E1124 18:03:47.861881 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jnsb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-7d695c9b56-jg4mn_openstack-operators(52de35ae-ab63-4e1b-88d1-e42033ee56b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:03:47 crc kubenswrapper[4768]: I1124 18:03:47.914877 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" path="/var/lib/kubelet/pods/5e67aebe-8102-4767-9d4a-00c5e0317271/volumes" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.399452 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.516198 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49j48\" (UniqueName: \"kubernetes.io/projected/7447f851-9eef-48b9-849e-ac7a51793472-kube-api-access-49j48\") pod \"7447f851-9eef-48b9-849e-ac7a51793472\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.516292 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-utilities\") pod \"7447f851-9eef-48b9-849e-ac7a51793472\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.516353 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-catalog-content\") pod \"7447f851-9eef-48b9-849e-ac7a51793472\" (UID: \"7447f851-9eef-48b9-849e-ac7a51793472\") " Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.517227 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-utilities" (OuterVolumeSpecName: "utilities") pod "7447f851-9eef-48b9-849e-ac7a51793472" (UID: "7447f851-9eef-48b9-849e-ac7a51793472"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.536071 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7447f851-9eef-48b9-849e-ac7a51793472-kube-api-access-49j48" (OuterVolumeSpecName: "kube-api-access-49j48") pod "7447f851-9eef-48b9-849e-ac7a51793472" (UID: "7447f851-9eef-48b9-849e-ac7a51793472"). InnerVolumeSpecName "kube-api-access-49j48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.563706 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7447f851-9eef-48b9-849e-ac7a51793472" (UID: "7447f851-9eef-48b9-849e-ac7a51793472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.607112 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jjlh" event={"ID":"7447f851-9eef-48b9-849e-ac7a51793472","Type":"ContainerDied","Data":"7919abb051c4dc27bc99bfdb9b32d591570220fe9cd9361dbde4b7dd61863203"} Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.607187 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jjlh" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.618326 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.618354 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7447f851-9eef-48b9-849e-ac7a51793472-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.618366 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49j48\" (UniqueName: \"kubernetes.io/projected/7447f851-9eef-48b9-849e-ac7a51793472-kube-api-access-49j48\") on node \"crc\" DevicePath \"\"" Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.641374 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6jjlh"] Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.645999 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6jjlh"] Nov 24 18:03:51 crc kubenswrapper[4768]: I1124 18:03:51.907190 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7447f851-9eef-48b9-849e-ac7a51793472" path="/var/lib/kubelet/pods/7447f851-9eef-48b9-849e-ac7a51793472/volumes" Nov 24 18:03:52 crc kubenswrapper[4768]: I1124 18:03:52.464126 4768 scope.go:117] "RemoveContainer" containerID="55ec5d4f13ae3252bbac738285e62133b4dd2e06494655b8977d17833134daa5" Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.054241 4768 scope.go:117] "RemoveContainer" containerID="8012cc4857fc2773853df36a6b23e6f5c1ed6053c564d2922b3f98331e4b6046" Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.185033 4768 scope.go:117] "RemoveContainer" containerID="b600176374993e364a035c880a47d3a7a62d9306df0f7d4e6af352aa8710677a" Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.311098 4768 scope.go:117] "RemoveContainer" 
containerID="557e217321498a3b8c3e9981d64ac2c67bb7d03f6d1cfbe23079f57aea81e307" Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.470343 4768 scope.go:117] "RemoveContainer" containerID="19eeecb5e8ab8b10b2267db036c6bacdd34b12001c0db227a4d8317d88b408a0" Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.629837 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" event={"ID":"8d6fc3b4-896a-4480-9371-930a2882151e","Type":"ContainerStarted","Data":"79486c7693f50317d9ea0c8adfff2578c7c21749918db4059c7b3534e82972f0"} Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.632147 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" event={"ID":"78e75462-3120-4d07-a571-56727914e173","Type":"ContainerStarted","Data":"8595ad53a1a74765e4294551d53e50e22f0fd011342966ba2fc4276a2301cbd8"} Nov 24 18:03:53 crc kubenswrapper[4768]: I1124 18:03:53.635071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" event={"ID":"4d4b069e-80e6-409b-aeee-130ac4351f32","Type":"ContainerStarted","Data":"6d644f43eadf0d03450a735b6fcafd70bc660e2640c24a3ae1adafc3b3a35060"} Nov 24 18:03:54 crc kubenswrapper[4768]: I1124 18:03:54.645437 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" event={"ID":"583db3d6-5f9c-4ce1-8214-06963fe50f96","Type":"ContainerStarted","Data":"543dbb768fe73a0e628b729cd6ef483548432938b2b6263baf662a2544485856"} Nov 24 18:03:54 crc kubenswrapper[4768]: I1124 18:03:54.648087 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" event={"ID":"c6d6eee2-6cb1-411d-837f-921b1c6c92fb","Type":"ContainerStarted","Data":"bc3b68dc995f8707bba46d8200a46ec54382ebb282789e60d578cc27851a8fe3"} Nov 24 18:03:54 crc kubenswrapper[4768]: I1124 18:03:54.651913 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" event={"ID":"dfa124f2-a194-4cae-bfed-eb56288e56a6","Type":"ContainerStarted","Data":"f11bc6955560875e6ece382c7486f3ae24385caac8f8e2b84d35d4741ec26c50"} Nov 24 18:03:54 crc kubenswrapper[4768]: I1124 18:03:54.656577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" event={"ID":"0f74f3df-ed63-4105-882e-c3122177da3a","Type":"ContainerStarted","Data":"95f67b60c352154952318cf8c160845718a816fe7bcdd2aa35fb6103a8cd200e"} Nov 24 18:03:54 crc kubenswrapper[4768]: I1124 18:03:54.671161 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-csz8k" podStartSLOduration=4.171909402 podStartE2EDuration="35.671144439s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.562208897 +0000 UTC m=+840.422790674" lastFinishedPulling="2025-11-24 18:03:53.061443934 +0000 UTC m=+871.922025711" observedRunningTime="2025-11-24 18:03:54.66600232 +0000 UTC m=+873.526584097" watchObservedRunningTime="2025-11-24 18:03:54.671144439 +0000 UTC m=+873.531726216" Nov 24 18:03:54 crc kubenswrapper[4768]: E1124 18:03:54.794791 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" podUID="1f0a9442-916e-442d-bb0f-6060ba5915c8" Nov 24 18:03:54 crc kubenswrapper[4768]: E1124 18:03:54.840195 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" podUID="28171867-a10a-4f0c-840d-ce55038bcd93" Nov 24 18:03:54 crc kubenswrapper[4768]: E1124 18:03:54.857472 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" podUID="2c04229f-5a27-4477-816d-60d5f1977144" Nov 24 18:03:54 crc kubenswrapper[4768]: E1124 18:03:54.932109 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" podUID="b44a0f95-c792-4375-9292-34a95608c64f" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.032945 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" podUID="8d92c413-b62d-4896-ae13-1ee9608aa65a" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.126280 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" podUID="ab197189-f8ba-4b06-b62a-73dd90994a39" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.198197 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" podUID="d54c925d-91d6-4bb8-acff-623c4f213352" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.266957 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" podUID="7a599ec7-7361-4e08-8d81-3cfc208d41b5" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.366463 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" podUID="29ac0137-f29a-4a1f-8435-f4ec688a5948" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.410747 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" podUID="ab3b5e40-6284-45cb-822e-a9490b1794c5" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.498364 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" podUID="8fe91de1-efe8-43e5-8b29-89043d06e880" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.519119 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" podUID="34b164fd-5d2f-4c00-83dc-ad8a90f4b94c" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.582452 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" podUID="c6d746c7-cf41-4ebd-95ba-e23836f6e5d4" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.650104 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" podUID="afa155f0-dde8-4d99-a454-527207b3189c" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.667583 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" event={"ID":"c6d746c7-cf41-4ebd-95ba-e23836f6e5d4","Type":"ContainerStarted","Data":"5e2de1bf5460b5b71c3c7b8e59ea6353dea79d0ee08e9da2b2424506fcbc236f"} Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.670264 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" podUID="52de35ae-ab63-4e1b-88d1-e42033ee56b7" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.672326 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" event={"ID":"583db3d6-5f9c-4ce1-8214-06963fe50f96","Type":"ContainerStarted","Data":"d9343a201c037f95aa17db3d05ca92a3dc4f972afbf63bb2cf09322bbc7879ae"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.672824 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.674423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" event={"ID":"52de35ae-ab63-4e1b-88d1-e42033ee56b7","Type":"ContainerStarted","Data":"87be466007a8dcae47f72df9b7aa7f3f9da23551e8cd9b5e188c97a3de8563d1"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.676690 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" 
event={"ID":"d54c925d-91d6-4bb8-acff-623c4f213352","Type":"ContainerStarted","Data":"fcaec76de6e171b09ff35739dc48ba7f076af54595abd158b361d1fded980faf"} Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.677724 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" podUID="d54c925d-91d6-4bb8-acff-623c4f213352" Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.677779 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" podUID="52de35ae-ab63-4e1b-88d1-e42033ee56b7" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.679000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" event={"ID":"7a599ec7-7361-4e08-8d81-3cfc208d41b5","Type":"ContainerStarted","Data":"8c1c87d0302ea944e461f529095c0cf86ca3669e0bb74e457c4ca5de8e8cae85"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.681615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" event={"ID":"8d92c413-b62d-4896-ae13-1ee9608aa65a","Type":"ContainerStarted","Data":"459ca056b2fd4d1403394ca084d9b605913548bb14d01ecd26301b1d0f99f923"} Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.683281 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" podUID="8d92c413-b62d-4896-ae13-1ee9608aa65a" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.687782 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" event={"ID":"b44a0f95-c792-4375-9292-34a95608c64f","Type":"ContainerStarted","Data":"f84071fa9a365add34d28f1e12a9a7ce89765608860ac420b98fdf96a51ed423"} Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.688965 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:f0688f6a55b7b548aaafd5c2c4f0749a43e7ea447c62a24e8b35257c5d8ba17f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" podUID="b44a0f95-c792-4375-9292-34a95608c64f" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.690598 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" event={"ID":"0f74f3df-ed63-4105-882e-c3122177da3a","Type":"ContainerStarted","Data":"8780b9e08325adc37b0360577010c4a5ff755e5235af38dab13a03585e460337"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.690803 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.692147 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" event={"ID":"1f0a9442-916e-442d-bb0f-6060ba5915c8","Type":"ContainerStarted","Data":"e60d35e346d9f12f06aa3d3ceff33139512b42cb493fdd8b1149a2a3422ce460"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.704415 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" event={"ID":"4d4b069e-80e6-409b-aeee-130ac4351f32","Type":"ContainerStarted","Data":"be0bc76460e948e9be70bb8837374dc3bb268cb0c518d70367c8dae534ca4281"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.705127 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.711146 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" event={"ID":"ab3b5e40-6284-45cb-822e-a9490b1794c5","Type":"ContainerStarted","Data":"98222377cd7ed28e63a523d16edfa1d32c06395ffe7440f027b822100e9da5e2"} Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.714007 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" podUID="ab3b5e40-6284-45cb-822e-a9490b1794c5" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.715292 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" event={"ID":"28171867-a10a-4f0c-840d-ce55038bcd93","Type":"ContainerStarted","Data":"df09d37b767f98b3c153f601b74b0f86fed576dc0a32a3eac35fddb685f08ebb"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.724647 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" event={"ID":"ab197189-f8ba-4b06-b62a-73dd90994a39","Type":"ContainerStarted","Data":"668f880e3f3443d3901a1552923d12ebbd27bb1eba6123257c2b75ce83d6656f"} Nov 24 18:03:55 crc kubenswrapper[4768]: E1124 18:03:55.726370 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" podUID="ab197189-f8ba-4b06-b62a-73dd90994a39" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.732614 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" event={"ID":"2c04229f-5a27-4477-816d-60d5f1977144","Type":"ContainerStarted","Data":"ed5c36db06bc16bddfd1a7e2cb36cbd58af286eafe0f05eb527ad5c486bffa41"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.747174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" 
event={"ID":"afa155f0-dde8-4d99-a454-527207b3189c","Type":"ContainerStarted","Data":"a40bb76f2fc8d535dc658fa95ae2ad86cc13a30a81b629600b8eae35ef93ea87"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.761275 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" event={"ID":"29ac0137-f29a-4a1f-8435-f4ec688a5948","Type":"ContainerStarted","Data":"e9175ef15e80aa9c128bc0244b5a7cb9c101829ae59d7b850499435da73202ed"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.765145 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" podStartSLOduration=3.626868456 podStartE2EDuration="36.765130739s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.443976552 +0000 UTC m=+840.304558329" lastFinishedPulling="2025-11-24 18:03:54.582238815 +0000 UTC m=+873.442820612" observedRunningTime="2025-11-24 18:03:55.744754473 +0000 UTC m=+874.605336250" watchObservedRunningTime="2025-11-24 18:03:55.765130739 +0000 UTC m=+874.625712516" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.781906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" event={"ID":"8d6fc3b4-896a-4480-9371-930a2882151e","Type":"ContainerStarted","Data":"ca33ca01c661f8d6ef70a178504846edd871bb13f7bc28423e556c14a50b3ddd"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.781978 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.785395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" event={"ID":"8fe91de1-efe8-43e5-8b29-89043d06e880","Type":"ContainerStarted","Data":"cad91ed53a675c36d5c6a0d31d64f80bcd0671015941f6b021f8fadba2e11140"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.789782 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" event={"ID":"78e75462-3120-4d07-a571-56727914e173","Type":"ContainerStarted","Data":"b614c047a8d18950ee5c90eef6958b3cf4114cf99581892104a3ee3b5194e49c"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.790476 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.802169 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" event={"ID":"c6d6eee2-6cb1-411d-837f-921b1c6c92fb","Type":"ContainerStarted","Data":"87094080a6bbb53fe394f95b0c1231e56137138ca50e81cec660e4a4f93f9ad0"} Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.802777 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.810199 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" event={"ID":"34b164fd-5d2f-4c00-83dc-ad8a90f4b94c","Type":"ContainerStarted","Data":"2351cb68a17eda48fa8b0a12180df7d89d42feab4ae4e17b7cebb6982500a3a3"} Nov 24 18:03:55 crc 
kubenswrapper[4768]: I1124 18:03:55.885776 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" podStartSLOduration=5.295832599 podStartE2EDuration="36.885754198s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.445930667 +0000 UTC m=+840.306512444" lastFinishedPulling="2025-11-24 18:03:53.035852266 +0000 UTC m=+871.896434043" observedRunningTime="2025-11-24 18:03:55.880021507 +0000 UTC m=+874.740603274" watchObservedRunningTime="2025-11-24 18:03:55.885754198 +0000 UTC m=+874.746335975" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.953226 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" podStartSLOduration=3.320220126 podStartE2EDuration="36.953208007s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.478755003 +0000 UTC m=+840.339336790" lastFinishedPulling="2025-11-24 18:03:55.111742894 +0000 UTC m=+873.972324671" observedRunningTime="2025-11-24 18:03:55.950683743 +0000 UTC m=+874.811265520" watchObservedRunningTime="2025-11-24 18:03:55.953208007 +0000 UTC m=+874.813789784" Nov 24 18:03:55 crc kubenswrapper[4768]: I1124 18:03:55.969362 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" podStartSLOduration=3.328597509 podStartE2EDuration="36.969336709s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.124644967 +0000 UTC m=+839.985226744" lastFinishedPulling="2025-11-24 18:03:54.765384177 +0000 UTC m=+873.625965944" observedRunningTime="2025-11-24 18:03:55.967822582 +0000 UTC m=+874.828404359" watchObservedRunningTime="2025-11-24 18:03:55.969336709 +0000 UTC m=+874.829918486" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.022364 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" podStartSLOduration=3.745273623 podStartE2EDuration="37.022345306s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.475155934 +0000 UTC m=+840.335737711" lastFinishedPulling="2025-11-24 18:03:54.752227627 +0000 UTC m=+873.612809394" observedRunningTime="2025-11-24 18:03:55.994963927 +0000 UTC m=+874.855545704" watchObservedRunningTime="2025-11-24 18:03:56.022345306 +0000 UTC m=+874.882927083" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.153061 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" podStartSLOduration=3.9621552429999998 podStartE2EDuration="37.153044501s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.496724269 +0000 UTC m=+840.357306046" lastFinishedPulling="2025-11-24 18:03:54.687613527 +0000 UTC m=+873.548195304" observedRunningTime="2025-11-24 18:03:56.144666994 +0000 UTC m=+875.005248771" watchObservedRunningTime="2025-11-24 18:03:56.153044501 +0000 UTC m=+875.013626278" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.818094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" 
event={"ID":"28171867-a10a-4f0c-840d-ce55038bcd93","Type":"ContainerStarted","Data":"f4e24d72d9815107f4d04cc13f9ff2ff968993cd543fa1fe08e6c69bcd2608c5"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.818889 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.820643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" event={"ID":"1f0a9442-916e-442d-bb0f-6060ba5915c8","Type":"ContainerStarted","Data":"3b27980015ef8e679798923de58a88d25c67ea73311089c7bd27f5fed2c4539e"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.820841 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.822563 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" event={"ID":"34b164fd-5d2f-4c00-83dc-ad8a90f4b94c","Type":"ContainerStarted","Data":"ebd48eec90211bb7a9270811d424f133daf0133fc99fe574d0a99e7b9e5a8f7a"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.822764 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.825963 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" event={"ID":"29ac0137-f29a-4a1f-8435-f4ec688a5948","Type":"ContainerStarted","Data":"e2b5c81ef3f8be7396f32a35b22de1532924da36c4bfedf3a5b4d6ee57ba6688"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.826180 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.827992 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" event={"ID":"8fe91de1-efe8-43e5-8b29-89043d06e880","Type":"ContainerStarted","Data":"dab5946cb133227c1e5ec23bba991aad5e75fe3c64966b3edc07621bfc238be7"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.829776 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" event={"ID":"7a599ec7-7361-4e08-8d81-3cfc208d41b5","Type":"ContainerStarted","Data":"23cff45c9621410daa0132af2d7a3439ad2aacf6276492477f3f99de6bdb8fd0"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.830059 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.831604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" event={"ID":"2c04229f-5a27-4477-816d-60d5f1977144","Type":"ContainerStarted","Data":"66ee314c91dcdc0f4b565b633a3171e2cec5e0879869031ce308650e75f41e80"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.831726 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 
18:03:56.833095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" event={"ID":"c6d746c7-cf41-4ebd-95ba-e23836f6e5d4","Type":"ContainerStarted","Data":"4818c404d5e14561b435aa7c36e216faf75d447a909433ab146a2e47cde2c4ac"} Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.833216 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.834752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" event={"ID":"afa155f0-dde8-4d99-a454-527207b3189c","Type":"ContainerStarted","Data":"67c980e4e2610c4174f0b7f7267d84c4d2326cfeaaf98c4347f17291e4f5ceb6"} Nov 24 18:03:56 crc kubenswrapper[4768]: E1124 18:03:56.836324 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" podUID="ab3b5e40-6284-45cb-822e-a9490b1794c5" Nov 24 18:03:56 crc kubenswrapper[4768]: E1124 18:03:56.837538 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" podUID="d54c925d-91d6-4bb8-acff-623c4f213352" Nov 24 18:03:56 crc kubenswrapper[4768]: E1124 18:03:56.837677 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" podUID="52de35ae-ab63-4e1b-88d1-e42033ee56b7" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.849141 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" podStartSLOduration=2.677503675 podStartE2EDuration="37.849125143s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:20.629462169 +0000 UTC m=+839.490043946" lastFinishedPulling="2025-11-24 18:03:55.801083637 +0000 UTC m=+874.661665414" observedRunningTime="2025-11-24 18:03:56.846262653 +0000 UTC m=+875.706844430" watchObservedRunningTime="2025-11-24 18:03:56.849125143 +0000 UTC m=+875.709706920" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.902502 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" podStartSLOduration=2.894028379 podStartE2EDuration="37.902471075s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.431836678 +0000 UTC m=+840.292418455" lastFinishedPulling="2025-11-24 18:03:56.440279374 +0000 UTC m=+875.300861151" observedRunningTime="2025-11-24 18:03:56.898115609 +0000 UTC m=+875.758697386" 
watchObservedRunningTime="2025-11-24 18:03:56.902471075 +0000 UTC m=+875.763052852" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.923146 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" podStartSLOduration=2.812141509 podStartE2EDuration="37.923131306s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.443933021 +0000 UTC m=+840.304514798" lastFinishedPulling="2025-11-24 18:03:56.554922818 +0000 UTC m=+875.415504595" observedRunningTime="2025-11-24 18:03:56.918510926 +0000 UTC m=+875.779092703" watchObservedRunningTime="2025-11-24 18:03:56.923131306 +0000 UTC m=+875.783713083" Nov 24 18:03:56 crc kubenswrapper[4768]: I1124 18:03:56.962413 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" podStartSLOduration=3.181866837 podStartE2EDuration="37.962392313s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.406663403 +0000 UTC m=+840.267245180" lastFinishedPulling="2025-11-24 18:03:56.187188889 +0000 UTC m=+875.047770656" observedRunningTime="2025-11-24 18:03:56.95764087 +0000 UTC m=+875.818222647" watchObservedRunningTime="2025-11-24 18:03:56.962392313 +0000 UTC m=+875.822974090" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.000876 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" podStartSLOduration=2.782742459 podStartE2EDuration="38.000858295s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.337565315 +0000 UTC m=+840.198147102" lastFinishedPulling="2025-11-24 18:03:56.555681161 +0000 UTC m=+875.416262938" observedRunningTime="2025-11-24 18:03:56.996242494 +0000 UTC m=+875.856824271" watchObservedRunningTime="2025-11-24 18:03:57.000858295 +0000 UTC m=+875.861440072" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.053199 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" podStartSLOduration=3.296551054 podStartE2EDuration="38.053184291s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.474911726 +0000 UTC m=+840.335493503" lastFinishedPulling="2025-11-24 18:03:56.231544963 +0000 UTC m=+875.092126740" observedRunningTime="2025-11-24 18:03:57.051130584 +0000 UTC m=+875.911712351" watchObservedRunningTime="2025-11-24 18:03:57.053184291 +0000 UTC m=+875.913766068" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.077065 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" podStartSLOduration=3.054599706 podStartE2EDuration="38.077041507s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.362713829 +0000 UTC m=+840.223295606" lastFinishedPulling="2025-11-24 18:03:56.38515563 +0000 UTC m=+875.245737407" observedRunningTime="2025-11-24 18:03:57.072940766 +0000 UTC m=+875.933522553" watchObservedRunningTime="2025-11-24 18:03:57.077041507 +0000 UTC m=+875.937623284" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.133049 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" podStartSLOduration=2.687443616 podStartE2EDuration="38.133033696s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:20.633315435 +0000 UTC m=+839.493897212" lastFinishedPulling="2025-11-24 18:03:56.078905515 +0000 UTC m=+874.939487292" observedRunningTime="2025-11-24 18:03:57.132215142 +0000 UTC m=+875.992796919" watchObservedRunningTime="2025-11-24 18:03:57.133033696 +0000 UTC m=+875.993615473" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.151773 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" podStartSLOduration=2.830690407 podStartE2EDuration="38.151757614s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.066663508 +0000 UTC m=+839.927245275" lastFinishedPulling="2025-11-24 18:03:56.387730705 +0000 UTC m=+875.248312482" observedRunningTime="2025-11-24 18:03:57.150286468 +0000 UTC m=+876.010868245" watchObservedRunningTime="2025-11-24 18:03:57.151757614 +0000 UTC m=+876.012339391" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.842013 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" event={"ID":"ab197189-f8ba-4b06-b62a-73dd90994a39","Type":"ContainerStarted","Data":"a9a5d1b1d375b67240b5fcf6ea2bf294db740a1400712293248f61a5da3ebe6b"} Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.842594 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.843597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" event={"ID":"8d92c413-b62d-4896-ae13-1ee9608aa65a","Type":"ContainerStarted","Data":"4b6a84928d6e5366a4488c906721604e6f3a61ffef20c7d05eea8b73438ec8b3"} Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.843766 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.845183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" event={"ID":"b44a0f95-c792-4375-9292-34a95608c64f","Type":"ContainerStarted","Data":"db50fefa7853728d83f1658f5866f3060a2c4ddd1559aeb25942a08c863f8492"} Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.847334 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.847371 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.863918 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" podStartSLOduration=2.137862161 podStartE2EDuration="38.863899246s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:20.532817031 +0000 UTC m=+839.393398808" lastFinishedPulling="2025-11-24 18:03:57.258854116 +0000 UTC m=+876.119435893" 
observedRunningTime="2025-11-24 18:03:57.858012313 +0000 UTC m=+876.718594090" watchObservedRunningTime="2025-11-24 18:03:57.863899246 +0000 UTC m=+876.724481023" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.874136 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" podStartSLOduration=2.561530646 podStartE2EDuration="38.874125185s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.134363666 +0000 UTC m=+839.994945443" lastFinishedPulling="2025-11-24 18:03:57.446958205 +0000 UTC m=+876.307539982" observedRunningTime="2025-11-24 18:03:57.873712318 +0000 UTC m=+876.734294105" watchObservedRunningTime="2025-11-24 18:03:57.874125185 +0000 UTC m=+876.734706962" Nov 24 18:03:57 crc kubenswrapper[4768]: I1124 18:03:57.889928 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" podStartSLOduration=2.722421747 podStartE2EDuration="38.88990705s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.058173943 +0000 UTC m=+839.918755720" lastFinishedPulling="2025-11-24 18:03:57.225659246 +0000 UTC m=+876.086241023" observedRunningTime="2025-11-24 18:03:57.889677406 +0000 UTC m=+876.750259193" watchObservedRunningTime="2025-11-24 18:03:57.88990705 +0000 UTC m=+876.750488827" Nov 24 18:03:59 crc kubenswrapper[4768]: I1124 18:03:59.643185 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:03:59 crc kubenswrapper[4768]: I1124 18:03:59.732688 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-5sprh" Nov 24 18:03:59 crc kubenswrapper[4768]: I1124 18:03:59.877082 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-4mqdl" Nov 24 18:04:00 crc kubenswrapper[4768]: I1124 18:04:00.091395 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-lfbgz" Nov 24 18:04:00 crc kubenswrapper[4768]: I1124 18:04:00.264763 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-2264q" Nov 24 18:04:00 crc kubenswrapper[4768]: I1124 18:04:00.310310 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-fz64p" Nov 24 18:04:00 crc kubenswrapper[4768]: I1124 18:04:00.399680 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-2t64b" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.396449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wtd7r" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.422926 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-nx9kk" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.505985 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-69fbff6fff-t2zl8" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.527931 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-xw2jj" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.581997 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-k5fkx" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.652469 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-858778c9dc-2wljz" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.803410 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-b6vk2" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.824881 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-846gl" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.842986 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-hdfsr" Nov 24 18:04:09 crc kubenswrapper[4768]: I1124 18:04:09.894818 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-f95nv" Nov 24 18:04:10 crc kubenswrapper[4768]: I1124 18:04:10.131438 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-d2hdv" Nov 24 18:04:10 crc kubenswrapper[4768]: I1124 18:04:10.341594 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-4dwgz" Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.003073 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" event={"ID":"52de35ae-ab63-4e1b-88d1-e42033ee56b7","Type":"ContainerStarted","Data":"7cb1538a85ade1fba8f6002190d72a27dfe75df294264b422562c647c6a0f7ea"} Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.004773 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.005199 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" event={"ID":"d54c925d-91d6-4bb8-acff-623c4f213352","Type":"ContainerStarted","Data":"61e5b3616859d41b27a6822933c69e0c3797602f681ffc05b74f7632a7190bb8"} Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.005443 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.012317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" event={"ID":"ab3b5e40-6284-45cb-822e-a9490b1794c5","Type":"ContainerStarted","Data":"8779141bd664d0d90cd32109a668c136937664f36e515e5af73301be9af79d02"} Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 
18:04:19.012639 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.029389 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" podStartSLOduration=2.156530243 podStartE2EDuration="1m0.029363548s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:20.471711254 +0000 UTC m=+839.332293041" lastFinishedPulling="2025-11-24 18:04:18.344544439 +0000 UTC m=+897.205126346" observedRunningTime="2025-11-24 18:04:19.026083979 +0000 UTC m=+897.886665756" watchObservedRunningTime="2025-11-24 18:04:19.029363548 +0000 UTC m=+897.889945355" Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.041347 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" podStartSLOduration=2.8529409 podStartE2EDuration="1m0.041330962s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:21.097087128 +0000 UTC m=+839.957668905" lastFinishedPulling="2025-11-24 18:04:18.28547719 +0000 UTC m=+897.146058967" observedRunningTime="2025-11-24 18:04:19.03942842 +0000 UTC m=+897.900010197" watchObservedRunningTime="2025-11-24 18:04:19.041330962 +0000 UTC m=+897.901912739" Nov 24 18:04:19 crc kubenswrapper[4768]: I1124 18:04:19.067773 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" podStartSLOduration=3.929853391 podStartE2EDuration="1m0.067751157s" podCreationTimestamp="2025-11-24 18:03:19 +0000 UTC" firstStartedPulling="2025-11-24 18:03:22.144951703 +0000 UTC m=+841.005533480" lastFinishedPulling="2025-11-24 18:04:18.282849469 +0000 UTC m=+897.143431246" observedRunningTime="2025-11-24 18:04:19.059123713 +0000 UTC m=+897.919705500" watchObservedRunningTime="2025-11-24 18:04:19.067751157 +0000 UTC m=+897.928332934" Nov 24 18:04:29 crc kubenswrapper[4768]: I1124 18:04:29.471052 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jg4mn" Nov 24 18:04:29 crc kubenswrapper[4768]: I1124 18:04:29.655218 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m6skf" Nov 24 18:04:31 crc kubenswrapper[4768]: I1124 18:04:31.537508 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-lv927" Nov 24 18:04:43 crc kubenswrapper[4768]: I1124 18:04:43.656625 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:04:43 crc kubenswrapper[4768]: I1124 18:04:43.657200 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 
24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.331038 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-98ncd"] Nov 24 18:04:49 crc kubenswrapper[4768]: E1124 18:04:49.332108 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="extract-utilities" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332124 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="extract-utilities" Nov 24 18:04:49 crc kubenswrapper[4768]: E1124 18:04:49.332173 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="registry-server" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332182 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="registry-server" Nov 24 18:04:49 crc kubenswrapper[4768]: E1124 18:04:49.332206 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="registry-server" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332214 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="registry-server" Nov 24 18:04:49 crc kubenswrapper[4768]: E1124 18:04:49.332228 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="extract-content" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332236 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="extract-content" Nov 24 18:04:49 crc kubenswrapper[4768]: E1124 18:04:49.332249 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="extract-content" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332257 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="extract-content" Nov 24 18:04:49 crc kubenswrapper[4768]: E1124 18:04:49.332269 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="extract-utilities" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332277 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="extract-utilities" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332473 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7447f851-9eef-48b9-849e-ac7a51793472" containerName="registry-server" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.332515 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e67aebe-8102-4767-9d4a-00c5e0317271" containerName="registry-server" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.333461 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.339813 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.340404 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.340610 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xl8kw" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.340815 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.360723 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-98ncd"] Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.428861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d52769e-02ab-4b7a-8df8-4448bf606f7f-config\") pod \"dnsmasq-dns-675f4bcbfc-98ncd\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.428918 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6828\" (UniqueName: \"kubernetes.io/projected/0d52769e-02ab-4b7a-8df8-4448bf606f7f-kube-api-access-l6828\") pod \"dnsmasq-dns-675f4bcbfc-98ncd\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.450399 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dj8sx"] Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.451768 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.453990 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.503937 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dj8sx"] Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.529499 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.529556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6828\" (UniqueName: \"kubernetes.io/projected/0d52769e-02ab-4b7a-8df8-4448bf606f7f-kube-api-access-l6828\") pod \"dnsmasq-dns-675f4bcbfc-98ncd\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.529602 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-config\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.529639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74mt\" (UniqueName: \"kubernetes.io/projected/c2a56b62-a01e-4d84-bbb9-8efb01152dad-kube-api-access-n74mt\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.529805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d52769e-02ab-4b7a-8df8-4448bf606f7f-config\") pod \"dnsmasq-dns-675f4bcbfc-98ncd\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.530793 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d52769e-02ab-4b7a-8df8-4448bf606f7f-config\") pod \"dnsmasq-dns-675f4bcbfc-98ncd\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.550794 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6828\" (UniqueName: \"kubernetes.io/projected/0d52769e-02ab-4b7a-8df8-4448bf606f7f-kube-api-access-l6828\") pod \"dnsmasq-dns-675f4bcbfc-98ncd\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.631855 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-config\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 
18:04:49.632567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74mt\" (UniqueName: \"kubernetes.io/projected/c2a56b62-a01e-4d84-bbb9-8efb01152dad-kube-api-access-n74mt\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.632710 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.633094 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-config\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.635784 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.658594 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74mt\" (UniqueName: \"kubernetes.io/projected/c2a56b62-a01e-4d84-bbb9-8efb01152dad-kube-api-access-n74mt\") pod \"dnsmasq-dns-78dd6ddcc-dj8sx\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.660378 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.765021 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:04:49 crc kubenswrapper[4768]: I1124 18:04:49.995886 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dj8sx"] Nov 24 18:04:50 crc kubenswrapper[4768]: I1124 18:04:50.008273 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 18:04:50 crc kubenswrapper[4768]: I1124 18:04:50.090089 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-98ncd"] Nov 24 18:04:50 crc kubenswrapper[4768]: W1124 18:04:50.099465 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d52769e_02ab_4b7a_8df8_4448bf606f7f.slice/crio-b2ffb004b43617e56bb3a8e42dd662c0f72ca3d32c565eded14187909881bb0d WatchSource:0}: Error finding container b2ffb004b43617e56bb3a8e42dd662c0f72ca3d32c565eded14187909881bb0d: Status 404 returned error can't find the container with id b2ffb004b43617e56bb3a8e42dd662c0f72ca3d32c565eded14187909881bb0d Nov 24 18:04:50 crc kubenswrapper[4768]: I1124 18:04:50.250238 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" event={"ID":"0d52769e-02ab-4b7a-8df8-4448bf606f7f","Type":"ContainerStarted","Data":"b2ffb004b43617e56bb3a8e42dd662c0f72ca3d32c565eded14187909881bb0d"} Nov 24 18:04:50 crc kubenswrapper[4768]: I1124 18:04:50.252195 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" event={"ID":"c2a56b62-a01e-4d84-bbb9-8efb01152dad","Type":"ContainerStarted","Data":"ec46f1afe0dbde5a594790a5232ae7a9e4bc9f72e14d5a0dcef00da4890e6a5b"} Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.363779 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-98ncd"] Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.394980 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gv6m8"] Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.396348 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.412983 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gv6m8"] Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.472151 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.472263 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-678z6\" (UniqueName: \"kubernetes.io/projected/f6eeedbc-36a6-4a1b-b879-be0c92682663-kube-api-access-678z6\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.472291 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-config\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.573375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.573472 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-678z6\" (UniqueName: \"kubernetes.io/projected/f6eeedbc-36a6-4a1b-b879-be0c92682663-kube-api-access-678z6\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.573521 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-config\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.574906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-config\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.574913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.607000 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-678z6\" (UniqueName: 
\"kubernetes.io/projected/f6eeedbc-36a6-4a1b-b879-be0c92682663-kube-api-access-678z6\") pod \"dnsmasq-dns-666b6646f7-gv6m8\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.667219 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dj8sx"] Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.695575 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9x98m"] Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.696763 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.707271 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9x98m"] Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.726645 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.777249 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-config\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.777334 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.777367 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2td\" (UniqueName: \"kubernetes.io/projected/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-kube-api-access-8f2td\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.880775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.881226 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2td\" (UniqueName: \"kubernetes.io/projected/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-kube-api-access-8f2td\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.881276 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-config\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.881881 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.882052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-config\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:52 crc kubenswrapper[4768]: I1124 18:04:52.911393 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2td\" (UniqueName: \"kubernetes.io/projected/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-kube-api-access-8f2td\") pod \"dnsmasq-dns-57d769cc4f-9x98m\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.022189 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.261596 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gv6m8"] Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.527039 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.528721 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.533885 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.534047 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mn6tk" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.534105 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.534197 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.534765 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.534761 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.534874 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.547765 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593715 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593799 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593900 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-config-data\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f67f41ac-4a1d-45c4-baaf-500062871fcb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.593976 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.594025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrrl\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-kube-api-access-mcrrl\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.594053 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.594078 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f67f41ac-4a1d-45c4-baaf-500062871fcb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.695684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f67f41ac-4a1d-45c4-baaf-500062871fcb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.695771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.695827 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrrl\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-kube-api-access-mcrrl\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.695848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.695868 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f67f41ac-4a1d-45c4-baaf-500062871fcb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.695934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.696014 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.696123 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.696179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.696198 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.696336 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-config-data\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.696932 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.697029 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.697166 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.697614 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.697866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-config-data\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.698712 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.703851 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f67f41ac-4a1d-45c4-baaf-500062871fcb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.704157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " 
pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.704727 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f67f41ac-4a1d-45c4-baaf-500062871fcb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.706693 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.716671 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrrl\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-kube-api-access-mcrrl\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.723940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.811601 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.814168 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.817397 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.817473 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.817652 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.817861 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.817979 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.818204 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l62mf" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.818224 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.824133 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.861025 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvnw\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-kube-api-access-jwvnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899734 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899803 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899893 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899954 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.899989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96e8147b-fab1-4601-b8c7-00764af14ba7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.900016 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.900070 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96e8147b-fab1-4601-b8c7-00764af14ba7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:53 crc kubenswrapper[4768]: I1124 18:04:53.900212 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.003958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004036 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004074 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96e8147b-fab1-4601-b8c7-00764af14ba7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004090 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004125 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96e8147b-fab1-4601-b8c7-00764af14ba7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc 
kubenswrapper[4768]: I1124 18:04:54.004179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvnw\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-kube-api-access-jwvnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.004261 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.005086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.005274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.005369 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.005392 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.005659 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.005700 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.009120 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.009577 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96e8147b-fab1-4601-b8c7-00764af14ba7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.009757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96e8147b-fab1-4601-b8c7-00764af14ba7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.016986 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.022900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvnw\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-kube-api-access-jwvnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.024121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:54 crc kubenswrapper[4768]: I1124 18:04:54.139648 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.508162 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.510088 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.511773 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.512870 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.513046 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ztp64" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.521834 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.525117 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.532082 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629387 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629428 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629445 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cchxt\" (UniqueName: \"kubernetes.io/projected/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-kube-api-access-cchxt\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629646 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-kolla-config\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.629735 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-config-data-default\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731422 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-kolla-config\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731451 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-config-data-default\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731654 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731673 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cchxt\" (UniqueName: \"kubernetes.io/projected/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-kube-api-access-cchxt\") pod \"openstack-galera-0\" (UID: 
\"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.731959 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.732247 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.732514 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-config-data-default\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.733021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-kolla-config\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.733202 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.736707 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.749572 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.749886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cchxt\" (UniqueName: \"kubernetes.io/projected/8145b894-fd09-47c1-b9c2-0cb4cfa6d293-kube-api-access-cchxt\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.752087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"8145b894-fd09-47c1-b9c2-0cb4cfa6d293\") " pod="openstack/openstack-galera-0" Nov 24 18:04:55 crc kubenswrapper[4768]: I1124 18:04:55.859919 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.675039 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.678621 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.681469 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.681691 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gsgkt" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.681939 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.683398 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.698710 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.745986 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.746066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/758c992e-f62f-4efd-af1d-0c1279d68544-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.746147 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kpm\" (UniqueName: \"kubernetes.io/projected/758c992e-f62f-4efd-af1d-0c1279d68544-kube-api-access-b7kpm\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.746432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758c992e-f62f-4efd-af1d-0c1279d68544-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.746826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.746981 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/758c992e-f62f-4efd-af1d-0c1279d68544-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.747023 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.747220 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.848406 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.848701 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/758c992e-f62f-4efd-af1d-0c1279d68544-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.848847 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.848927 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849102 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/758c992e-f62f-4efd-af1d-0c1279d68544-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kpm\" (UniqueName: \"kubernetes.io/projected/758c992e-f62f-4efd-af1d-0c1279d68544-kube-api-access-b7kpm\") pod 
\"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849280 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758c992e-f62f-4efd-af1d-0c1279d68544-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.849876 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/758c992e-f62f-4efd-af1d-0c1279d68544-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.850119 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.851552 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/758c992e-f62f-4efd-af1d-0c1279d68544-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.855997 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/758c992e-f62f-4efd-af1d-0c1279d68544-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.869954 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758c992e-f62f-4efd-af1d-0c1279d68544-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.873232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kpm\" (UniqueName: \"kubernetes.io/projected/758c992e-f62f-4efd-af1d-0c1279d68544-kube-api-access-b7kpm\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " 
pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.881166 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"758c992e-f62f-4efd-af1d-0c1279d68544\") " pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.987290 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.988629 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.992152 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.992401 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qcqvq" Nov 24 18:04:56 crc kubenswrapper[4768]: I1124 18:04:56.992568 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.000216 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.007156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.052980 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40180404-c438-415c-8787-05a1cc8461d0-config-data\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.053033 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj8wn\" (UniqueName: \"kubernetes.io/projected/40180404-c438-415c-8787-05a1cc8461d0-kube-api-access-pj8wn\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.053056 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40180404-c438-415c-8787-05a1cc8461d0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.053141 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/40180404-c438-415c-8787-05a1cc8461d0-kolla-config\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.053158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/40180404-c438-415c-8787-05a1cc8461d0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.156340 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40180404-c438-415c-8787-05a1cc8461d0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.156473 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/40180404-c438-415c-8787-05a1cc8461d0-kolla-config\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.156520 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/40180404-c438-415c-8787-05a1cc8461d0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.156567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40180404-c438-415c-8787-05a1cc8461d0-config-data\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.156599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj8wn\" (UniqueName: \"kubernetes.io/projected/40180404-c438-415c-8787-05a1cc8461d0-kube-api-access-pj8wn\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.158788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/40180404-c438-415c-8787-05a1cc8461d0-kolla-config\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.160193 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40180404-c438-415c-8787-05a1cc8461d0-config-data\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.161347 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/40180404-c438-415c-8787-05a1cc8461d0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.165854 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40180404-c438-415c-8787-05a1cc8461d0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.194937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj8wn\" (UniqueName: \"kubernetes.io/projected/40180404-c438-415c-8787-05a1cc8461d0-kube-api-access-pj8wn\") pod \"memcached-0\" (UID: \"40180404-c438-415c-8787-05a1cc8461d0\") " pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.325566 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 18:04:57 crc kubenswrapper[4768]: I1124 18:04:57.341662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" event={"ID":"f6eeedbc-36a6-4a1b-b879-be0c92682663","Type":"ContainerStarted","Data":"a40b56cf01806a585b96a00fea3948ebac05450f81b407e2dc89de7112ffe6c9"} Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.162713 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.620313 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.621400 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.624065 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rg9xn" Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.632212 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.683795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svc9z\" (UniqueName: \"kubernetes.io/projected/296e4b18-c3e3-481d-bad3-0c2427ca013b-kube-api-access-svc9z\") pod \"kube-state-metrics-0\" (UID: \"296e4b18-c3e3-481d-bad3-0c2427ca013b\") " pod="openstack/kube-state-metrics-0" Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.785285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svc9z\" (UniqueName: \"kubernetes.io/projected/296e4b18-c3e3-481d-bad3-0c2427ca013b-kube-api-access-svc9z\") pod \"kube-state-metrics-0\" (UID: \"296e4b18-c3e3-481d-bad3-0c2427ca013b\") " pod="openstack/kube-state-metrics-0" Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.803374 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svc9z\" (UniqueName: \"kubernetes.io/projected/296e4b18-c3e3-481d-bad3-0c2427ca013b-kube-api-access-svc9z\") pod \"kube-state-metrics-0\" (UID: \"296e4b18-c3e3-481d-bad3-0c2427ca013b\") " pod="openstack/kube-state-metrics-0" Nov 24 18:04:58 crc kubenswrapper[4768]: I1124 18:04:58.944436 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 18:05:01 crc kubenswrapper[4768]: W1124 18:05:01.114503 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod758c992e_f62f_4efd_af1d_0c1279d68544.slice/crio-e3f55bf7a28935c262dadf2c54e0ac0c48f4b4e194ce0f1086b3e558e5bb62f2 WatchSource:0}: Error finding container e3f55bf7a28935c262dadf2c54e0ac0c48f4b4e194ce0f1086b3e558e5bb62f2: Status 404 returned error can't find the container with id e3f55bf7a28935c262dadf2c54e0ac0c48f4b4e194ce0f1086b3e558e5bb62f2 Nov 24 18:05:01 crc kubenswrapper[4768]: I1124 18:05:01.368246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"758c992e-f62f-4efd-af1d-0c1279d68544","Type":"ContainerStarted","Data":"e3f55bf7a28935c262dadf2c54e0ac0c48f4b4e194ce0f1086b3e558e5bb62f2"} Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.722255 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zlg8p"] Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.724505 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.731884 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.732258 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-46tw6" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.732602 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.741351 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zlg8p"] Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.749999 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xb8qp"] Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.753384 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.773902 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xb8qp"] Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853162 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-lib\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853431 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/509a2a18-bedf-4f92-bc91-608b5af92c1e-scripts\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853463 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/710c430d-b973-47b9-9917-2db7864f7570-ovn-controller-tls-certs\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853500 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-log-ovn\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853517 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-etc-ovs\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853555 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-run\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853579 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxzlf\" (UniqueName: \"kubernetes.io/projected/509a2a18-bedf-4f92-bc91-608b5af92c1e-kube-api-access-fxzlf\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.853981 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-run\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.854502 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-log\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.854641 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-run-ovn\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.854808 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/710c430d-b973-47b9-9917-2db7864f7570-scripts\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.854850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgj2t\" (UniqueName: \"kubernetes.io/projected/710c430d-b973-47b9-9917-2db7864f7570-kube-api-access-vgj2t\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.854960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710c430d-b973-47b9-9917-2db7864f7570-combined-ca-bundle\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-run\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxzlf\" (UniqueName: \"kubernetes.io/projected/509a2a18-bedf-4f92-bc91-608b5af92c1e-kube-api-access-fxzlf\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-run\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-log\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957194 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-run-ovn\") pod \"ovn-controller-zlg8p\" (UID: 
\"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957211 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/710c430d-b973-47b9-9917-2db7864f7570-scripts\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957231 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgj2t\" (UniqueName: \"kubernetes.io/projected/710c430d-b973-47b9-9917-2db7864f7570-kube-api-access-vgj2t\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957264 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710c430d-b973-47b9-9917-2db7864f7570-combined-ca-bundle\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957292 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-lib\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/710c430d-b973-47b9-9917-2db7864f7570-ovn-controller-tls-certs\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957334 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/509a2a18-bedf-4f92-bc91-608b5af92c1e-scripts\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957350 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-log-ovn\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957372 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-etc-ovs\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957705 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-run\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-run-ovn\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957775 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-run\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.957928 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-lib\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.958057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/710c430d-b973-47b9-9917-2db7864f7570-var-log-ovn\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.958368 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-var-log\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.958698 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/509a2a18-bedf-4f92-bc91-608b5af92c1e-etc-ovs\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.959956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/710c430d-b973-47b9-9917-2db7864f7570-scripts\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.960113 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/509a2a18-bedf-4f92-bc91-608b5af92c1e-scripts\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.965961 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/710c430d-b973-47b9-9917-2db7864f7570-ovn-controller-tls-certs\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.978605 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxzlf\" (UniqueName: \"kubernetes.io/projected/509a2a18-bedf-4f92-bc91-608b5af92c1e-kube-api-access-fxzlf\") pod \"ovn-controller-ovs-xb8qp\" (UID: \"509a2a18-bedf-4f92-bc91-608b5af92c1e\") " pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 
18:05:02.980472 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgj2t\" (UniqueName: \"kubernetes.io/projected/710c430d-b973-47b9-9917-2db7864f7570-kube-api-access-vgj2t\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:02 crc kubenswrapper[4768]: I1124 18:05:02.982429 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/710c430d-b973-47b9-9917-2db7864f7570-combined-ca-bundle\") pod \"ovn-controller-zlg8p\" (UID: \"710c430d-b973-47b9-9917-2db7864f7570\") " pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.054807 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.078736 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.615345 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.616889 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.621103 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nm2m8" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.621125 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.621447 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.623308 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.624373 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.629703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c15f153b-967a-4edd-8c49-fd474a1d5de3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770236 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " 
pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcsj7\" (UniqueName: \"kubernetes.io/projected/c15f153b-967a-4edd-8c49-fd474a1d5de3-kube-api-access-zcsj7\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770616 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f153b-967a-4edd-8c49-fd474a1d5de3-config\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770686 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.770738 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15f153b-967a-4edd-8c49-fd474a1d5de3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.871988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872043 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15f153b-967a-4edd-8c49-fd474a1d5de3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c15f153b-967a-4edd-8c49-fd474a1d5de3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872181 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872217 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcsj7\" (UniqueName: \"kubernetes.io/projected/c15f153b-967a-4edd-8c49-fd474a1d5de3-kube-api-access-zcsj7\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872239 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f153b-967a-4edd-8c49-fd474a1d5de3-config\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872422 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.872611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c15f153b-967a-4edd-8c49-fd474a1d5de3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.873157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f153b-967a-4edd-8c49-fd474a1d5de3-config\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.873510 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15f153b-967a-4edd-8c49-fd474a1d5de3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.877082 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.878293 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.879549 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15f153b-967a-4edd-8c49-fd474a1d5de3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.888597 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcsj7\" (UniqueName: \"kubernetes.io/projected/c15f153b-967a-4edd-8c49-fd474a1d5de3-kube-api-access-zcsj7\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.898291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c15f153b-967a-4edd-8c49-fd474a1d5de3\") " pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:03 crc kubenswrapper[4768]: I1124 18:05:03.962256 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: E1124 18:05:06.145346 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 18:05:06 crc kubenswrapper[4768]: E1124 18:05:06.145604 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 18:05:06 crc kubenswrapper[4768]: E1124 18:05:06.145790 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n74mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-dj8sx_openstack(c2a56b62-a01e-4d84-bbb9-8efb01152dad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:05:06 crc kubenswrapper[4768]: E1124 18:05:06.145853 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6828,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-98ncd_openstack(0d52769e-02ab-4b7a-8df8-4448bf606f7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:05:06 crc kubenswrapper[4768]: E1124 18:05:06.147273 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" podUID="c2a56b62-a01e-4d84-bbb9-8efb01152dad" Nov 24 18:05:06 crc kubenswrapper[4768]: E1124 18:05:06.147330 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" podUID="0d52769e-02ab-4b7a-8df8-4448bf606f7f" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.522241 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.525412 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.527431 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fsvzx" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.527592 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.527707 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.529000 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.536465 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.549132 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlwr\" (UniqueName: \"kubernetes.io/projected/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-kube-api-access-prlwr\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629692 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629734 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629799 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629844 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-config\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.629862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731548 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prlwr\" (UniqueName: \"kubernetes.io/projected/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-kube-api-access-prlwr\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731754 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731847 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.731899 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-config\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 
18:05:06.733044 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-config\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.739370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.739533 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.740291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.762252 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.763196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prlwr\" (UniqueName: \"kubernetes.io/projected/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-kube-api-access-prlwr\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.770192 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.784042 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d5ef6-f6b9-4930-8426-a0718b3a754f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.805839 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4b5d5ef6-f6b9-4930-8426-a0718b3a754f\") " pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:06 crc kubenswrapper[4768]: I1124 18:05:06.919682 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.024615 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.042253 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9x98m"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.058773 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.078936 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.138406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-config\") pod \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.138634 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n74mt\" (UniqueName: \"kubernetes.io/projected/c2a56b62-a01e-4d84-bbb9-8efb01152dad-kube-api-access-n74mt\") pod \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.138665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-dns-svc\") pod \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\" (UID: \"c2a56b62-a01e-4d84-bbb9-8efb01152dad\") " Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.141068 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-config" (OuterVolumeSpecName: "config") pod "c2a56b62-a01e-4d84-bbb9-8efb01152dad" (UID: "c2a56b62-a01e-4d84-bbb9-8efb01152dad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.142572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2a56b62-a01e-4d84-bbb9-8efb01152dad" (UID: "c2a56b62-a01e-4d84-bbb9-8efb01152dad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.148169 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a56b62-a01e-4d84-bbb9-8efb01152dad-kube-api-access-n74mt" (OuterVolumeSpecName: "kube-api-access-n74mt") pod "c2a56b62-a01e-4d84-bbb9-8efb01152dad" (UID: "c2a56b62-a01e-4d84-bbb9-8efb01152dad"). InnerVolumeSpecName "kube-api-access-n74mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.194766 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.212589 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.222835 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zlg8p"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.235156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.240039 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d52769e-02ab-4b7a-8df8-4448bf606f7f-config\") pod \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.240192 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6828\" (UniqueName: \"kubernetes.io/projected/0d52769e-02ab-4b7a-8df8-4448bf606f7f-kube-api-access-l6828\") pod \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\" (UID: \"0d52769e-02ab-4b7a-8df8-4448bf606f7f\") " Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.240572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d52769e-02ab-4b7a-8df8-4448bf606f7f-config" (OuterVolumeSpecName: "config") pod "0d52769e-02ab-4b7a-8df8-4448bf606f7f" (UID: "0d52769e-02ab-4b7a-8df8-4448bf606f7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.240605 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.240624 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n74mt\" (UniqueName: \"kubernetes.io/projected/c2a56b62-a01e-4d84-bbb9-8efb01152dad-kube-api-access-n74mt\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.240637 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2a56b62-a01e-4d84-bbb9-8efb01152dad-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.246273 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d52769e-02ab-4b7a-8df8-4448bf606f7f-kube-api-access-l6828" (OuterVolumeSpecName: "kube-api-access-l6828") pod "0d52769e-02ab-4b7a-8df8-4448bf606f7f" (UID: "0d52769e-02ab-4b7a-8df8-4448bf606f7f"). InnerVolumeSpecName "kube-api-access-l6828". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.256294 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 18:05:07 crc kubenswrapper[4768]: W1124 18:05:07.258609 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod710c430d_b973_47b9_9917_2db7864f7570.slice/crio-7b71ee92f6af71895568593b1ee3d932bc9c92dbc7d355b6b5661ba47fd36997 WatchSource:0}: Error finding container 7b71ee92f6af71895568593b1ee3d932bc9c92dbc7d355b6b5661ba47fd36997: Status 404 returned error can't find the container with id 7b71ee92f6af71895568593b1ee3d932bc9c92dbc7d355b6b5661ba47fd36997 Nov 24 18:05:07 crc kubenswrapper[4768]: W1124 18:05:07.260383 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40180404_c438_415c_8787_05a1cc8461d0.slice/crio-ac94213d578b2615ef2627b594dce366e91d4cb893e3436e059aed02415c4141 WatchSource:0}: Error finding container ac94213d578b2615ef2627b594dce366e91d4cb893e3436e059aed02415c4141: Status 404 returned error can't find the container with id ac94213d578b2615ef2627b594dce366e91d4cb893e3436e059aed02415c4141 Nov 24 18:05:07 crc kubenswrapper[4768]: W1124 18:05:07.276889 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15f153b_967a_4edd_8c49_fd474a1d5de3.slice/crio-8bd1ece7170d693a83befa52080866a2c8ff5d37a86061b6769c0023849b6f7c WatchSource:0}: Error finding container 8bd1ece7170d693a83befa52080866a2c8ff5d37a86061b6769c0023849b6f7c: Status 404 returned error can't find the container with id 8bd1ece7170d693a83befa52080866a2c8ff5d37a86061b6769c0023849b6f7c Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.342185 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6828\" (UniqueName: \"kubernetes.io/projected/0d52769e-02ab-4b7a-8df8-4448bf606f7f-kube-api-access-l6828\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.342218 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d52769e-02ab-4b7a-8df8-4448bf606f7f-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.348937 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xb8qp"] Nov 24 18:05:07 crc kubenswrapper[4768]: W1124 18:05:07.359693 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod509a2a18_bedf_4f92_bc91_608b5af92c1e.slice/crio-8d7c0b4e394a8f10fea8299c99ca802ea009627237a5214dcecf44b313894f97 WatchSource:0}: Error finding container 8d7c0b4e394a8f10fea8299c99ca802ea009627237a5214dcecf44b313894f97: Status 404 returned error can't find the container with id 8d7c0b4e394a8f10fea8299c99ca802ea009627237a5214dcecf44b313894f97 Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.420333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xb8qp" event={"ID":"509a2a18-bedf-4f92-bc91-608b5af92c1e","Type":"ContainerStarted","Data":"8d7c0b4e394a8f10fea8299c99ca802ea009627237a5214dcecf44b313894f97"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.422270 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"96e8147b-fab1-4601-b8c7-00764af14ba7","Type":"ContainerStarted","Data":"c65e5810d33f33b3c2f8a887ae2bf700b4b2eb2b9361687ff6bef594fa6f2a93"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.423611 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c15f153b-967a-4edd-8c49-fd474a1d5de3","Type":"ContainerStarted","Data":"8bd1ece7170d693a83befa52080866a2c8ff5d37a86061b6769c0023849b6f7c"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.425369 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" event={"ID":"c2a56b62-a01e-4d84-bbb9-8efb01152dad","Type":"ContainerDied","Data":"ec46f1afe0dbde5a594790a5232ae7a9e4bc9f72e14d5a0dcef00da4890e6a5b"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.425383 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-dj8sx" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.428587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" event={"ID":"17f8bfef-8839-4f35-9fa1-fd55d683cfbf","Type":"ContainerStarted","Data":"e40d751e92a4ea546eeed701774bbd1d85744dd10dd2e360875b8493b386ed2a"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.428654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" event={"ID":"17f8bfef-8839-4f35-9fa1-fd55d683cfbf","Type":"ContainerStarted","Data":"715bc9e626e0771d32a4d1203b4582a950029204cb8d26e47e86a097eda04d4d"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.431454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p" event={"ID":"710c430d-b973-47b9-9917-2db7864f7570","Type":"ContainerStarted","Data":"7b71ee92f6af71895568593b1ee3d932bc9c92dbc7d355b6b5661ba47fd36997"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.434153 4768 generic.go:334] "Generic (PLEG): container finished" podID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerID="bb4d433793cfc64e4bfe85689d9a80f51f90462a6efa405656c8be12f3d73cfa" exitCode=0 Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.434302 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" event={"ID":"f6eeedbc-36a6-4a1b-b879-be0c92682663","Type":"ContainerDied","Data":"bb4d433793cfc64e4bfe85689d9a80f51f90462a6efa405656c8be12f3d73cfa"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.436045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f67f41ac-4a1d-45c4-baaf-500062871fcb","Type":"ContainerStarted","Data":"f95e1bbdaeb935ca0649e2d67369388443e77579b144542c9afa98a356d06b35"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.437956 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" event={"ID":"0d52769e-02ab-4b7a-8df8-4448bf606f7f","Type":"ContainerDied","Data":"b2ffb004b43617e56bb3a8e42dd662c0f72ca3d32c565eded14187909881bb0d"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.438130 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-98ncd" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.445102 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"40180404-c438-415c-8787-05a1cc8461d0","Type":"ContainerStarted","Data":"ac94213d578b2615ef2627b594dce366e91d4cb893e3436e059aed02415c4141"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.453386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"296e4b18-c3e3-481d-bad3-0c2427ca013b","Type":"ContainerStarted","Data":"57f3c6bdcfb5e435d3ebd065dfa54dd13798e30cac1734f79bdfccbcd2c96e5a"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.457335 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8145b894-fd09-47c1-b9c2-0cb4cfa6d293","Type":"ContainerStarted","Data":"1408a782ded7d3a9c9390626801981ebeef43d223d9232a467add5c399c49aff"} Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.536637 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dj8sx"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.548205 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-dj8sx"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.584722 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-98ncd"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.593524 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-98ncd"] Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.620870 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 18:05:07 crc kubenswrapper[4768]: E1124 18:05:07.632920 4768 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 24 18:05:07 crc kubenswrapper[4768]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f6eeedbc-36a6-4a1b-b879-be0c92682663/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 18:05:07 crc kubenswrapper[4768]: > podSandboxID="a40b56cf01806a585b96a00fea3948ebac05450f81b407e2dc89de7112ffe6c9" Nov 24 18:05:07 crc kubenswrapper[4768]: E1124 18:05:07.633152 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 24 18:05:07 crc kubenswrapper[4768]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-678z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-gv6m8_openstack(f6eeedbc-36a6-4a1b-b879-be0c92682663): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f6eeedbc-36a6-4a1b-b879-be0c92682663/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 18:05:07 crc kubenswrapper[4768]: > logger="UnhandledError" Nov 24 18:05:07 crc kubenswrapper[4768]: E1124 18:05:07.634731 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f6eeedbc-36a6-4a1b-b879-be0c92682663/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" Nov 24 18:05:07 crc kubenswrapper[4768]: W1124 18:05:07.676159 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b5d5ef6_f6b9_4930_8426_a0718b3a754f.slice/crio-0a91a7b4f4c7786f3e2e878cb6befe0657ef30ffc40a0f34228d7fdddec69967 WatchSource:0}: Error finding container 0a91a7b4f4c7786f3e2e878cb6befe0657ef30ffc40a0f34228d7fdddec69967: Status 404 returned error can't find the container with id 
0a91a7b4f4c7786f3e2e878cb6befe0657ef30ffc40a0f34228d7fdddec69967 Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.937953 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d52769e-02ab-4b7a-8df8-4448bf606f7f" path="/var/lib/kubelet/pods/0d52769e-02ab-4b7a-8df8-4448bf606f7f/volumes" Nov 24 18:05:07 crc kubenswrapper[4768]: I1124 18:05:07.938809 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a56b62-a01e-4d84-bbb9-8efb01152dad" path="/var/lib/kubelet/pods/c2a56b62-a01e-4d84-bbb9-8efb01152dad/volumes" Nov 24 18:05:08 crc kubenswrapper[4768]: I1124 18:05:08.467952 4768 generic.go:334] "Generic (PLEG): container finished" podID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerID="e40d751e92a4ea546eeed701774bbd1d85744dd10dd2e360875b8493b386ed2a" exitCode=0 Nov 24 18:05:08 crc kubenswrapper[4768]: I1124 18:05:08.468046 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" event={"ID":"17f8bfef-8839-4f35-9fa1-fd55d683cfbf","Type":"ContainerDied","Data":"e40d751e92a4ea546eeed701774bbd1d85744dd10dd2e360875b8493b386ed2a"} Nov 24 18:05:08 crc kubenswrapper[4768]: I1124 18:05:08.470717 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4b5d5ef6-f6b9-4930-8426-a0718b3a754f","Type":"ContainerStarted","Data":"0a91a7b4f4c7786f3e2e878cb6befe0657ef30ffc40a0f34228d7fdddec69967"} Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.664004 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-f9558"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.669199 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.670466 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f9558"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.671838 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.698905 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.698989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-config\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.699039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-ovs-rundir\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.699054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-ovn-rundir\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.700582 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dgr\" (UniqueName: \"kubernetes.io/projected/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-kube-api-access-d6dgr\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.700724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-combined-ca-bundle\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.798081 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9x98m"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.801688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-combined-ca-bundle\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.801769 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.801808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-config\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.801843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-ovs-rundir\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.801859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-ovn-rundir\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.801915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6dgr\" (UniqueName: \"kubernetes.io/projected/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-kube-api-access-d6dgr\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 
18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.802898 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-ovs-rundir\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.802923 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-ovn-rundir\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.804319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-config\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.807439 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.810103 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-combined-ca-bundle\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.849631 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kglqz"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.857877 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.864444 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6dgr\" (UniqueName: \"kubernetes.io/projected/2beabb7a-c951-4e24-8a6e-83ceb0ebb087-kube-api-access-d6dgr\") pod \"ovn-controller-metrics-f9558\" (UID: \"2beabb7a-c951-4e24-8a6e-83ceb0ebb087\") " pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.864835 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.880305 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kglqz"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.903627 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.903704 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhfm\" (UniqueName: \"kubernetes.io/projected/40c17639-b9a9-4576-9bea-30e780d4580d-kube-api-access-4bhfm\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.903761 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.903922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-config\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.957568 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gv6m8"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.984224 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cfpv6"] Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.985795 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.988525 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.993789 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-f9558" Nov 24 18:05:10 crc kubenswrapper[4768]: I1124 18:05:10.998593 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cfpv6"] Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.005659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.005736 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-config\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.005761 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.006420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhfm\" (UniqueName: \"kubernetes.io/projected/40c17639-b9a9-4576-9bea-30e780d4580d-kube-api-access-4bhfm\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.009288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.009867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-config\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.010440 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.029683 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhfm\" (UniqueName: \"kubernetes.io/projected/40c17639-b9a9-4576-9bea-30e780d4580d-kube-api-access-4bhfm\") pod \"dnsmasq-dns-7fd796d7df-kglqz\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.108062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6vk\" (UniqueName: 
\"kubernetes.io/projected/fd223bd5-4be2-4240-bd86-a72e479be131-kube-api-access-8w6vk\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.108180 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.108356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.108389 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.108464 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-config\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.209642 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.209685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.209724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-config\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.209788 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6vk\" (UniqueName: \"kubernetes.io/projected/fd223bd5-4be2-4240-bd86-a72e479be131-kube-api-access-8w6vk\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.209817 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.210893 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.210957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.211002 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.211042 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-config\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.211062 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.229777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6vk\" (UniqueName: \"kubernetes.io/projected/fd223bd5-4be2-4240-bd86-a72e479be131-kube-api-access-8w6vk\") pod \"dnsmasq-dns-86db49b7ff-cfpv6\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:11 crc kubenswrapper[4768]: I1124 18:05:11.313960 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:13 crc kubenswrapper[4768]: I1124 18:05:13.655844 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:05:13 crc kubenswrapper[4768]: I1124 18:05:13.656192 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:05:14 crc kubenswrapper[4768]: I1124 18:05:14.755455 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cfpv6"] Nov 24 18:05:14 crc kubenswrapper[4768]: I1124 18:05:14.858284 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kglqz"] Nov 24 18:05:14 crc kubenswrapper[4768]: W1124 18:05:14.874567 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd223bd5_4be2_4240_bd86_a72e479be131.slice/crio-e426b643b3b110bf667d015810eea610e897533d3f5b28ed62dcbc8e7dbcf216 WatchSource:0}: Error finding container e426b643b3b110bf667d015810eea610e897533d3f5b28ed62dcbc8e7dbcf216: Status 404 returned error can't find the container with id e426b643b3b110bf667d015810eea610e897533d3f5b28ed62dcbc8e7dbcf216 Nov 24 18:05:14 crc kubenswrapper[4768]: W1124 18:05:14.877536 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40c17639_b9a9_4576_9bea_30e780d4580d.slice/crio-bf585357c64d8203a5b18f69d2fe5b24b8be3edf555a53a4f78415f3d16e90a4 WatchSource:0}: Error finding container bf585357c64d8203a5b18f69d2fe5b24b8be3edf555a53a4f78415f3d16e90a4: Status 404 returned error can't find the container with id bf585357c64d8203a5b18f69d2fe5b24b8be3edf555a53a4f78415f3d16e90a4 Nov 24 18:05:14 crc kubenswrapper[4768]: I1124 18:05:14.948937 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f9558"] Nov 24 18:05:15 crc kubenswrapper[4768]: W1124 18:05:15.066626 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2beabb7a_c951_4e24_8a6e_83ceb0ebb087.slice/crio-6323a630ed986f4821d429cc8f1aad118281d7fcec03b35568b2931cace8a5a8 WatchSource:0}: Error finding container 6323a630ed986f4821d429cc8f1aad118281d7fcec03b35568b2931cace8a5a8: Status 404 returned error can't find the container with id 6323a630ed986f4821d429cc8f1aad118281d7fcec03b35568b2931cace8a5a8 Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.520667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xb8qp" event={"ID":"509a2a18-bedf-4f92-bc91-608b5af92c1e","Type":"ContainerStarted","Data":"6d42147446c202afdf1f45a881a52eb6ca921483116d512bf120e7c2f1fb9184"} Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.524056 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"40180404-c438-415c-8787-05a1cc8461d0","Type":"ContainerStarted","Data":"b27cbe0a3618ec9693d99c059c185e3bb600425f106373cb25a4907fa611a3fe"} Nov 24 
18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.524692 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.528253 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f9558" event={"ID":"2beabb7a-c951-4e24-8a6e-83ceb0ebb087","Type":"ContainerStarted","Data":"6323a630ed986f4821d429cc8f1aad118281d7fcec03b35568b2931cace8a5a8"} Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.530463 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" event={"ID":"fd223bd5-4be2-4240-bd86-a72e479be131","Type":"ContainerStarted","Data":"e426b643b3b110bf667d015810eea610e897533d3f5b28ed62dcbc8e7dbcf216"} Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.535201 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" event={"ID":"17f8bfef-8839-4f35-9fa1-fd55d683cfbf","Type":"ContainerStarted","Data":"d1e768d293f454e400f3723e84ff2e18c37fcdb9504512a885e63e97476e36cc"} Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.535368 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerName="dnsmasq-dns" containerID="cri-o://d1e768d293f454e400f3723e84ff2e18c37fcdb9504512a885e63e97476e36cc" gracePeriod=10 Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.535648 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.538769 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" event={"ID":"f6eeedbc-36a6-4a1b-b879-be0c92682663","Type":"ContainerStarted","Data":"392e371c78af459e6eb5e34f30524ffceda063d513891078f8d935948772fc79"} Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.538910 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerName="dnsmasq-dns" containerID="cri-o://392e371c78af459e6eb5e34f30524ffceda063d513891078f8d935948772fc79" gracePeriod=10 Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.538978 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.547654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" event={"ID":"40c17639-b9a9-4576-9bea-30e780d4580d","Type":"ContainerStarted","Data":"bf585357c64d8203a5b18f69d2fe5b24b8be3edf555a53a4f78415f3d16e90a4"} Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.563777 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.842264852 podStartE2EDuration="19.563758638s" podCreationTimestamp="2025-11-24 18:04:56 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.264051702 +0000 UTC m=+946.124633479" lastFinishedPulling="2025-11-24 18:05:13.985545478 +0000 UTC m=+952.846127265" observedRunningTime="2025-11-24 18:05:15.559478736 +0000 UTC m=+954.420060523" watchObservedRunningTime="2025-11-24 18:05:15.563758638 +0000 UTC m=+954.424340435" Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.579474 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" podStartSLOduration=14.488999094 podStartE2EDuration="23.579448258s" podCreationTimestamp="2025-11-24 18:04:52 +0000 UTC" firstStartedPulling="2025-11-24 18:04:57.166340398 +0000 UTC m=+936.026922175" lastFinishedPulling="2025-11-24 18:05:06.256789562 +0000 UTC m=+945.117371339" observedRunningTime="2025-11-24 18:05:15.5764823 +0000 UTC m=+954.437064077" watchObservedRunningTime="2025-11-24 18:05:15.579448258 +0000 UTC m=+954.440030035" Nov 24 18:05:15 crc kubenswrapper[4768]: I1124 18:05:15.594019 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" podStartSLOduration=23.594000417 podStartE2EDuration="23.594000417s" podCreationTimestamp="2025-11-24 18:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:05:15.590842695 +0000 UTC m=+954.451424462" watchObservedRunningTime="2025-11-24 18:05:15.594000417 +0000 UTC m=+954.454582194" Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.556462 4768 generic.go:334] "Generic (PLEG): container finished" podID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerID="392e371c78af459e6eb5e34f30524ffceda063d513891078f8d935948772fc79" exitCode=0 Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.556528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" event={"ID":"f6eeedbc-36a6-4a1b-b879-be0c92682663","Type":"ContainerDied","Data":"392e371c78af459e6eb5e34f30524ffceda063d513891078f8d935948772fc79"} Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.556868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" event={"ID":"f6eeedbc-36a6-4a1b-b879-be0c92682663","Type":"ContainerDied","Data":"a40b56cf01806a585b96a00fea3948ebac05450f81b407e2dc89de7112ffe6c9"} Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.556888 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40b56cf01806a585b96a00fea3948ebac05450f81b407e2dc89de7112ffe6c9" Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.558159 4768 generic.go:334] "Generic (PLEG): container finished" podID="509a2a18-bedf-4f92-bc91-608b5af92c1e" containerID="6d42147446c202afdf1f45a881a52eb6ca921483116d512bf120e7c2f1fb9184" exitCode=0 Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.558267 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xb8qp" event={"ID":"509a2a18-bedf-4f92-bc91-608b5af92c1e","Type":"ContainerDied","Data":"6d42147446c202afdf1f45a881a52eb6ca921483116d512bf120e7c2f1fb9184"} Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.566712 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"758c992e-f62f-4efd-af1d-0c1279d68544","Type":"ContainerStarted","Data":"72c591b5726e9ab14e443c73957cb22f91aff94e67ab7fee38f29a31406b8367"} Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.569298 4768 generic.go:334] "Generic (PLEG): container finished" podID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerID="d1e768d293f454e400f3723e84ff2e18c37fcdb9504512a885e63e97476e36cc" exitCode=0 Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.569447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" 
event={"ID":"17f8bfef-8839-4f35-9fa1-fd55d683cfbf","Type":"ContainerDied","Data":"d1e768d293f454e400f3723e84ff2e18c37fcdb9504512a885e63e97476e36cc"} Nov 24 18:05:16 crc kubenswrapper[4768]: I1124 18:05:16.838723 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.006532 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-config\") pod \"f6eeedbc-36a6-4a1b-b879-be0c92682663\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.006691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-dns-svc\") pod \"f6eeedbc-36a6-4a1b-b879-be0c92682663\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.006973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-678z6\" (UniqueName: \"kubernetes.io/projected/f6eeedbc-36a6-4a1b-b879-be0c92682663-kube-api-access-678z6\") pod \"f6eeedbc-36a6-4a1b-b879-be0c92682663\" (UID: \"f6eeedbc-36a6-4a1b-b879-be0c92682663\") " Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.014369 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6eeedbc-36a6-4a1b-b879-be0c92682663-kube-api-access-678z6" (OuterVolumeSpecName: "kube-api-access-678z6") pod "f6eeedbc-36a6-4a1b-b879-be0c92682663" (UID: "f6eeedbc-36a6-4a1b-b879-be0c92682663"). InnerVolumeSpecName "kube-api-access-678z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.048927 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6eeedbc-36a6-4a1b-b879-be0c92682663" (UID: "f6eeedbc-36a6-4a1b-b879-be0c92682663"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.057066 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-config" (OuterVolumeSpecName: "config") pod "f6eeedbc-36a6-4a1b-b879-be0c92682663" (UID: "f6eeedbc-36a6-4a1b-b879-be0c92682663"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.106205 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.110683 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.110733 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6eeedbc-36a6-4a1b-b879-be0c92682663-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.110746 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-678z6\" (UniqueName: \"kubernetes.io/projected/f6eeedbc-36a6-4a1b-b879-be0c92682663-kube-api-access-678z6\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.212236 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-dns-svc\") pod \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.212341 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-config\") pod \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.212395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2td\" (UniqueName: \"kubernetes.io/projected/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-kube-api-access-8f2td\") pod \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\" (UID: \"17f8bfef-8839-4f35-9fa1-fd55d683cfbf\") " Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.220994 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-kube-api-access-8f2td" (OuterVolumeSpecName: "kube-api-access-8f2td") pod "17f8bfef-8839-4f35-9fa1-fd55d683cfbf" (UID: "17f8bfef-8839-4f35-9fa1-fd55d683cfbf"). InnerVolumeSpecName "kube-api-access-8f2td". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.314438 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f2td\" (UniqueName: \"kubernetes.io/projected/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-kube-api-access-8f2td\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.371596 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-config" (OuterVolumeSpecName: "config") pod "17f8bfef-8839-4f35-9fa1-fd55d683cfbf" (UID: "17f8bfef-8839-4f35-9fa1-fd55d683cfbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.382566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17f8bfef-8839-4f35-9fa1-fd55d683cfbf" (UID: "17f8bfef-8839-4f35-9fa1-fd55d683cfbf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.416135 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.416173 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f8bfef-8839-4f35-9fa1-fd55d683cfbf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.579210 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f67f41ac-4a1d-45c4-baaf-500062871fcb","Type":"ContainerStarted","Data":"c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.581107 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"296e4b18-c3e3-481d-bad3-0c2427ca013b","Type":"ContainerStarted","Data":"52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.581180 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.583887 4768 generic.go:334] "Generic (PLEG): container finished" podID="fd223bd5-4be2-4240-bd86-a72e479be131" containerID="8396cb9c173bd3c83a97029fc446cbfbcff303606f3c4c0551d36cd572cb3622" exitCode=0 Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.584115 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" event={"ID":"fd223bd5-4be2-4240-bd86-a72e479be131","Type":"ContainerDied","Data":"8396cb9c173bd3c83a97029fc446cbfbcff303606f3c4c0551d36cd572cb3622"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.587296 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.587932 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9x98m" event={"ID":"17f8bfef-8839-4f35-9fa1-fd55d683cfbf","Type":"ContainerDied","Data":"715bc9e626e0771d32a4d1203b4582a950029204cb8d26e47e86a097eda04d4d"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.587979 4768 scope.go:117] "RemoveContainer" containerID="d1e768d293f454e400f3723e84ff2e18c37fcdb9504512a885e63e97476e36cc" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.590383 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p" event={"ID":"710c430d-b973-47b9-9917-2db7864f7570","Type":"ContainerStarted","Data":"58a5a110c48e0ba15ea36c8eef05cc51341918914fdd01d0cb4212ab468974ba"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.591283 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.593236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4b5d5ef6-f6b9-4930-8426-a0718b3a754f","Type":"ContainerStarted","Data":"a519c7baec3aeccf3dd520c996dba2cb8a574026a760d618b90f71c306ef032b"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.595425 4768 generic.go:334] "Generic (PLEG): container finished" podID="40c17639-b9a9-4576-9bea-30e780d4580d" containerID="10236cd7c2a276e58197636e0dd7c07970d2ae1a62b65cd0dddfa4c6fcdd9131" exitCode=0 Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.595480 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" event={"ID":"40c17639-b9a9-4576-9bea-30e780d4580d","Type":"ContainerDied","Data":"10236cd7c2a276e58197636e0dd7c07970d2ae1a62b65cd0dddfa4c6fcdd9131"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.598795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xb8qp" event={"ID":"509a2a18-bedf-4f92-bc91-608b5af92c1e","Type":"ContainerStarted","Data":"c34f7abbf5a9308bc0097110518b2a63716877d8e7ce9538de1a5f223a7ecf98"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.602849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8145b894-fd09-47c1-b9c2-0cb4cfa6d293","Type":"ContainerStarted","Data":"c7c43da9eaf9cf13dd6a636bcfb2becabe53ef8ee706ea25685953dff828761c"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.609823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c15f153b-967a-4edd-8c49-fd474a1d5de3","Type":"ContainerStarted","Data":"dd4ccdd33229721cc0102d00ad6c9ba26c6650028067c35818a69d55b58400b4"} Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.609835 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gv6m8" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.620774 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zlg8p" podStartSLOduration=8.434937189 podStartE2EDuration="15.620741184s" podCreationTimestamp="2025-11-24 18:05:02 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.261039403 +0000 UTC m=+946.121621180" lastFinishedPulling="2025-11-24 18:05:14.446843408 +0000 UTC m=+953.307425175" observedRunningTime="2025-11-24 18:05:17.616281958 +0000 UTC m=+956.476863735" watchObservedRunningTime="2025-11-24 18:05:17.620741184 +0000 UTC m=+956.481322961" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.648407 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.558948278 podStartE2EDuration="19.648388426s" podCreationTimestamp="2025-11-24 18:04:58 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.223269027 +0000 UTC m=+946.083850804" lastFinishedPulling="2025-11-24 18:05:16.312709165 +0000 UTC m=+955.173290952" observedRunningTime="2025-11-24 18:05:17.646149447 +0000 UTC m=+956.506731224" watchObservedRunningTime="2025-11-24 18:05:17.648388426 +0000 UTC m=+956.508970203" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.718094 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gv6m8"] Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.723868 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gv6m8"] Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.729653 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9x98m"] Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.734588 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9x98m"] Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.908402 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" path="/var/lib/kubelet/pods/17f8bfef-8839-4f35-9fa1-fd55d683cfbf/volumes" Nov 24 18:05:17 crc kubenswrapper[4768]: I1124 18:05:17.909146 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" path="/var/lib/kubelet/pods/f6eeedbc-36a6-4a1b-b879-be0c92682663/volumes" Nov 24 18:05:18 crc kubenswrapper[4768]: I1124 18:05:18.445939 4768 scope.go:117] "RemoveContainer" containerID="e40d751e92a4ea546eeed701774bbd1d85744dd10dd2e360875b8493b386ed2a" Nov 24 18:05:18 crc kubenswrapper[4768]: I1124 18:05:18.626133 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96e8147b-fab1-4601-b8c7-00764af14ba7","Type":"ContainerStarted","Data":"a4aa0bb200172f83176cd90f33b02eadaee041ecd11044f7965416b7cf3adf3d"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.637786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xb8qp" event={"ID":"509a2a18-bedf-4f92-bc91-608b5af92c1e","Type":"ContainerStarted","Data":"2039d6d484bebeaacd9dac1239c4216cb9030b9fe89c74d2b9b6f687ba728774"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.638405 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.641969 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-f9558" event={"ID":"2beabb7a-c951-4e24-8a6e-83ceb0ebb087","Type":"ContainerStarted","Data":"e915fb9182a02a75dbbf981bc066ece77a507e3c5b7649c51b9d02509fc120a5"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.644701 4768 generic.go:334] "Generic (PLEG): container finished" podID="758c992e-f62f-4efd-af1d-0c1279d68544" containerID="72c591b5726e9ab14e443c73957cb22f91aff94e67ab7fee38f29a31406b8367" exitCode=0 Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.644786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"758c992e-f62f-4efd-af1d-0c1279d68544","Type":"ContainerDied","Data":"72c591b5726e9ab14e443c73957cb22f91aff94e67ab7fee38f29a31406b8367"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.651709 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c15f153b-967a-4edd-8c49-fd474a1d5de3","Type":"ContainerStarted","Data":"a283815c04c8a3051ab8495c38139413fe2658e14c56f16b9f216536414b3998"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.656075 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" event={"ID":"fd223bd5-4be2-4240-bd86-a72e479be131","Type":"ContainerStarted","Data":"9670aca0447e91bed48b8acb8636d7b8a53952ca3b86abc67ce05de9ccd1308c"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.656509 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.676372 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4b5d5ef6-f6b9-4930-8426-a0718b3a754f","Type":"ContainerStarted","Data":"7f2ec5364cc93651acb676ffa65997b490509b400609a4822d2c234208e7f877"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.679226 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xb8qp" podStartSLOduration=10.714514336 podStartE2EDuration="17.679198989s" podCreationTimestamp="2025-11-24 18:05:02 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.362190773 +0000 UTC m=+946.222772540" lastFinishedPulling="2025-11-24 18:05:14.326875416 +0000 UTC m=+953.187457193" observedRunningTime="2025-11-24 18:05:19.662356049 +0000 UTC m=+958.522937866" watchObservedRunningTime="2025-11-24 18:05:19.679198989 +0000 UTC m=+958.539780776" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.683168 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" event={"ID":"40c17639-b9a9-4576-9bea-30e780d4580d","Type":"ContainerStarted","Data":"19774061fab14c7dfec003e8e85db8b371d1567e91d46c3aa6cb31d64acb1940"} Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.683813 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.720779 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-f9558" podStartSLOduration=6.079622921 podStartE2EDuration="9.720761183s" podCreationTimestamp="2025-11-24 18:05:10 +0000 UTC" firstStartedPulling="2025-11-24 18:05:15.071799058 +0000 UTC m=+953.932380855" lastFinishedPulling="2025-11-24 18:05:18.71293734 +0000 UTC m=+957.573519117" observedRunningTime="2025-11-24 18:05:19.719541642 +0000 UTC m=+958.580123419" watchObservedRunningTime="2025-11-24 
18:05:19.720761183 +0000 UTC m=+958.581342960" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.752207 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.125812973 podStartE2EDuration="17.752182054s" podCreationTimestamp="2025-11-24 18:05:02 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.27856117 +0000 UTC m=+946.139142947" lastFinishedPulling="2025-11-24 18:05:18.904930251 +0000 UTC m=+957.765512028" observedRunningTime="2025-11-24 18:05:19.749251257 +0000 UTC m=+958.609833034" watchObservedRunningTime="2025-11-24 18:05:19.752182054 +0000 UTC m=+958.612763831" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.777836 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" podStartSLOduration=9.777814493 podStartE2EDuration="9.777814493s" podCreationTimestamp="2025-11-24 18:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:05:19.765825859 +0000 UTC m=+958.626407636" watchObservedRunningTime="2025-11-24 18:05:19.777814493 +0000 UTC m=+958.638396280" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.793889 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.72077822 podStartE2EDuration="14.793870791s" podCreationTimestamp="2025-11-24 18:05:05 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.679228377 +0000 UTC m=+946.539810154" lastFinishedPulling="2025-11-24 18:05:18.752320948 +0000 UTC m=+957.612902725" observedRunningTime="2025-11-24 18:05:19.78998077 +0000 UTC m=+958.650562557" watchObservedRunningTime="2025-11-24 18:05:19.793870791 +0000 UTC m=+958.654452558" Nov 24 18:05:19 crc kubenswrapper[4768]: I1124 18:05:19.812651 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" podStartSLOduration=9.812628301 podStartE2EDuration="9.812628301s" podCreationTimestamp="2025-11-24 18:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:05:19.810212018 +0000 UTC m=+958.670793795" watchObservedRunningTime="2025-11-24 18:05:19.812628301 +0000 UTC m=+958.673210078" Nov 24 18:05:20 crc kubenswrapper[4768]: I1124 18:05:20.693015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"758c992e-f62f-4efd-af1d-0c1279d68544","Type":"ContainerStarted","Data":"c00d05d6870a22a6fd1767ce7381d3dd7c8be677bc04606ea9194cbe6ffa89cb"} Nov 24 18:05:20 crc kubenswrapper[4768]: I1124 18:05:20.694922 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:20 crc kubenswrapper[4768]: I1124 18:05:20.722615 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.484376231 podStartE2EDuration="25.72259028s" podCreationTimestamp="2025-11-24 18:04:55 +0000 UTC" firstStartedPulling="2025-11-24 18:05:01.117078999 +0000 UTC m=+939.977660776" lastFinishedPulling="2025-11-24 18:05:14.355293048 +0000 UTC m=+953.215874825" observedRunningTime="2025-11-24 18:05:20.713611936 +0000 UTC m=+959.574193723" watchObservedRunningTime="2025-11-24 18:05:20.72259028 +0000 UTC m=+959.583172067" Nov 24 18:05:21 crc kubenswrapper[4768]: I1124 
18:05:21.706599 4768 generic.go:334] "Generic (PLEG): container finished" podID="8145b894-fd09-47c1-b9c2-0cb4cfa6d293" containerID="c7c43da9eaf9cf13dd6a636bcfb2becabe53ef8ee706ea25685953dff828761c" exitCode=0 Nov 24 18:05:21 crc kubenswrapper[4768]: I1124 18:05:21.706745 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8145b894-fd09-47c1-b9c2-0cb4cfa6d293","Type":"ContainerDied","Data":"c7c43da9eaf9cf13dd6a636bcfb2becabe53ef8ee706ea25685953dff828761c"} Nov 24 18:05:21 crc kubenswrapper[4768]: I1124 18:05:21.920654 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:21 crc kubenswrapper[4768]: I1124 18:05:21.920856 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:21 crc kubenswrapper[4768]: I1124 18:05:21.962865 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:21 crc kubenswrapper[4768]: I1124 18:05:21.965020 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:22 crc kubenswrapper[4768]: I1124 18:05:22.015362 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:22 crc kubenswrapper[4768]: I1124 18:05:22.326839 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 18:05:22 crc kubenswrapper[4768]: I1124 18:05:22.714065 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:22 crc kubenswrapper[4768]: I1124 18:05:22.752186 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 18:05:22 crc kubenswrapper[4768]: I1124 18:05:22.752448 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.088095 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 18:05:23 crc kubenswrapper[4768]: E1124 18:05:23.088427 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerName="init" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.088444 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerName="init" Nov 24 18:05:23 crc kubenswrapper[4768]: E1124 18:05:23.088455 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerName="init" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.088462 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerName="init" Nov 24 18:05:23 crc kubenswrapper[4768]: E1124 18:05:23.088476 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerName="dnsmasq-dns" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.088536 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerName="dnsmasq-dns" Nov 24 18:05:23 crc kubenswrapper[4768]: E1124 18:05:23.088551 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerName="dnsmasq-dns" Nov 24 18:05:23 crc 
kubenswrapper[4768]: I1124 18:05:23.088559 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerName="dnsmasq-dns" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.088751 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f8bfef-8839-4f35-9fa1-fd55d683cfbf" containerName="dnsmasq-dns" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.088765 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6eeedbc-36a6-4a1b-b879-be0c92682663" containerName="dnsmasq-dns" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.089607 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.092006 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.092013 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-l2ll5" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.092653 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.093452 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.104533 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216258 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8v6p\" (UniqueName: \"kubernetes.io/projected/09191ff5-4686-4243-a0b4-3dd710ead568-kube-api-access-s8v6p\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09191ff5-4686-4243-a0b4-3dd710ead568-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216335 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0" Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216646 4768 
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216646 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09191ff5-4686-4243-a0b4-3dd710ead568-scripts\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.216709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09191ff5-4686-4243-a0b4-3dd710ead568-config\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.317905 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.318382 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09191ff5-4686-4243-a0b4-3dd710ead568-scripts\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.318439 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09191ff5-4686-4243-a0b4-3dd710ead568-config\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.318516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.318564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8v6p\" (UniqueName: \"kubernetes.io/projected/09191ff5-4686-4243-a0b4-3dd710ead568-kube-api-access-s8v6p\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.318601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09191ff5-4686-4243-a0b4-3dd710ead568-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.318635 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.319255 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09191ff5-4686-4243-a0b4-3dd710ead568-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.319646 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09191ff5-4686-4243-a0b4-3dd710ead568-config\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.319736 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09191ff5-4686-4243-a0b4-3dd710ead568-scripts\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.323604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.325250 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.325584 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09191ff5-4686-4243-a0b4-3dd710ead568-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.342084 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8v6p\" (UniqueName: \"kubernetes.io/projected/09191ff5-4686-4243-a0b4-3dd710ead568-kube-api-access-s8v6p\") pod \"ovn-northd-0\" (UID: \"09191ff5-4686-4243-a0b4-3dd710ead568\") " pod="openstack/ovn-northd-0"
Nov 24 18:05:23 crc kubenswrapper[4768]: I1124 18:05:23.406937 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 24 18:05:24 crc kubenswrapper[4768]: I1124 18:05:24.392290 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 18:05:24 crc kubenswrapper[4768]: I1124 18:05:24.729023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09191ff5-4686-4243-a0b4-3dd710ead568","Type":"ContainerStarted","Data":"76ee1267b8546d394f635708ad5f0e31e8035b69873f4dfa1614173125cde1c1"}
Nov 24 18:05:26 crc kubenswrapper[4768]: I1124 18:05:26.212755 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz"
Nov 24 18:05:26 crc kubenswrapper[4768]: I1124 18:05:26.315688 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6"
Nov 24 18:05:26 crc kubenswrapper[4768]: I1124 18:05:26.375814 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kglqz"]
Nov 24 18:05:26 crc kubenswrapper[4768]: I1124 18:05:26.742945 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="dnsmasq-dns" containerID="cri-o://19774061fab14c7dfec003e8e85db8b371d1567e91d46c3aa6cb31d64acb1940" gracePeriod=10
Nov 24 18:05:27 crc kubenswrapper[4768]: I1124 18:05:27.001098 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 24 18:05:27 crc kubenswrapper[4768]: I1124 18:05:27.001164 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 24 18:05:27 crc kubenswrapper[4768]: I1124 18:05:27.754895 4768 generic.go:334] "Generic (PLEG): container finished" podID="40c17639-b9a9-4576-9bea-30e780d4580d" containerID="19774061fab14c7dfec003e8e85db8b371d1567e91d46c3aa6cb31d64acb1940" exitCode=0
Nov 24 18:05:27 crc kubenswrapper[4768]: I1124 18:05:27.754990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" event={"ID":"40c17639-b9a9-4576-9bea-30e780d4580d","Type":"ContainerDied","Data":"19774061fab14c7dfec003e8e85db8b371d1567e91d46c3aa6cb31d64acb1940"}
Nov 24 18:05:27 crc kubenswrapper[4768]: I1124 18:05:27.757455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8145b894-fd09-47c1-b9c2-0cb4cfa6d293","Type":"ContainerStarted","Data":"2328b0be1837c11fce66d3e2bf90d67ad1158b402f0f93aaf237b456121ab2c2"}
Nov 24 18:05:27 crc kubenswrapper[4768]: I1124 18:05:27.791327 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.428573916 podStartE2EDuration="33.791295298s" podCreationTimestamp="2025-11-24 18:04:54 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.088619232 +0000 UTC m=+945.949201009" lastFinishedPulling="2025-11-24 18:05:14.451340614 +0000 UTC m=+953.311922391" observedRunningTime="2025-11-24 18:05:27.787230353 +0000 UTC m=+966.647812140" watchObservedRunningTime="2025-11-24 18:05:27.791295298 +0000 UTC m=+966.651877115"
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.289623 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz"
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.406398 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bhfm\" (UniqueName: \"kubernetes.io/projected/40c17639-b9a9-4576-9bea-30e780d4580d-kube-api-access-4bhfm\") pod \"40c17639-b9a9-4576-9bea-30e780d4580d\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") "
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.406530 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-dns-svc\") pod \"40c17639-b9a9-4576-9bea-30e780d4580d\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") "
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.406548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-ovsdbserver-nb\") pod \"40c17639-b9a9-4576-9bea-30e780d4580d\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") "
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.406673 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-config\") pod \"40c17639-b9a9-4576-9bea-30e780d4580d\" (UID: \"40c17639-b9a9-4576-9bea-30e780d4580d\") "
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.411027 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c17639-b9a9-4576-9bea-30e780d4580d-kube-api-access-4bhfm" (OuterVolumeSpecName: "kube-api-access-4bhfm") pod "40c17639-b9a9-4576-9bea-30e780d4580d" (UID: "40c17639-b9a9-4576-9bea-30e780d4580d"). InnerVolumeSpecName "kube-api-access-4bhfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.442486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40c17639-b9a9-4576-9bea-30e780d4580d" (UID: "40c17639-b9a9-4576-9bea-30e780d4580d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.443988 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-config" (OuterVolumeSpecName: "config") pod "40c17639-b9a9-4576-9bea-30e780d4580d" (UID: "40c17639-b9a9-4576-9bea-30e780d4580d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.450397 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40c17639-b9a9-4576-9bea-30e780d4580d" (UID: "40c17639-b9a9-4576-9bea-30e780d4580d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.508664 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.508710 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bhfm\" (UniqueName: \"kubernetes.io/projected/40c17639-b9a9-4576-9bea-30e780d4580d-kube-api-access-4bhfm\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.508725 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.508736 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c17639-b9a9-4576-9bea-30e780d4580d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.767823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09191ff5-4686-4243-a0b4-3dd710ead568","Type":"ContainerStarted","Data":"a28e0535a5f169d6754b4313a15bae39578aee253d03163ff8c4ec38c7605d65"} Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.767872 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09191ff5-4686-4243-a0b4-3dd710ead568","Type":"ContainerStarted","Data":"cefb834acd3849a9895c6edacbd5b4e5472609743e8c9592228b618294f5f22c"} Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.768198 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.770191 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kglqz" event={"ID":"40c17639-b9a9-4576-9bea-30e780d4580d","Type":"ContainerDied","Data":"bf585357c64d8203a5b18f69d2fe5b24b8be3edf555a53a4f78415f3d16e90a4"} Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.770246 4768 scope.go:117] "RemoveContainer" containerID="19774061fab14c7dfec003e8e85db8b371d1567e91d46c3aa6cb31d64acb1940" Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.770388 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.796927 4768 scope.go:117] "RemoveContainer" containerID="10236cd7c2a276e58197636e0dd7c07970d2ae1a62b65cd0dddfa4c6fcdd9131"
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.806548 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.240201258 podStartE2EDuration="5.806525216s" podCreationTimestamp="2025-11-24 18:05:23 +0000 UTC" firstStartedPulling="2025-11-24 18:05:24.397254137 +0000 UTC m=+963.257835924" lastFinishedPulling="2025-11-24 18:05:27.963578105 +0000 UTC m=+966.824159882" observedRunningTime="2025-11-24 18:05:28.790728163 +0000 UTC m=+967.651309940" watchObservedRunningTime="2025-11-24 18:05:28.806525216 +0000 UTC m=+967.667106993"
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.807936 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kglqz"]
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.815633 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kglqz"]
Nov 24 18:05:28 crc kubenswrapper[4768]: I1124 18:05:28.949463 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 24 18:05:29 crc kubenswrapper[4768]: I1124 18:05:29.917739 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" path="/var/lib/kubelet/pods/40c17639-b9a9-4576-9bea-30e780d4580d/volumes"
Nov 24 18:05:29 crc kubenswrapper[4768]: I1124 18:05:29.990022 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 24 18:05:30 crc kubenswrapper[4768]: I1124 18:05:30.057163 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 24 18:05:35 crc kubenswrapper[4768]: I1124 18:05:35.861070 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 24 18:05:35 crc kubenswrapper[4768]: I1124 18:05:35.861972 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 24 18:05:35 crc kubenswrapper[4768]: I1124 18:05:35.981612 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 24 18:05:36 crc kubenswrapper[4768]: I1124 18:05:36.907570 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.376063 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bc6-account-create-pktcg"]
Nov 24 18:05:37 crc kubenswrapper[4768]: E1124 18:05:37.376409 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="dnsmasq-dns"
Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.376425 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="dnsmasq-dns"
Nov 24 18:05:37 crc kubenswrapper[4768]: E1124 18:05:37.376445 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="init"
Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.376451 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="init"
podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="init" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.376635 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c17639-b9a9-4576-9bea-30e780d4580d" containerName="dnsmasq-dns" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.377190 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.379539 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.382341 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rr57f"] Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.383462 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.390472 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bc6-account-create-pktcg"] Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.428796 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rr57f"] Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.466006 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s65cs\" (UniqueName: \"kubernetes.io/projected/61443b8e-bd3a-437e-8440-323561bc319b-kube-api-access-s65cs\") pod \"placement-db-create-rr57f\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") " pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.466063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab99221a-fafe-469a-a7e1-3355f432075e-operator-scripts\") pod \"placement-5bc6-account-create-pktcg\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") " pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.466103 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgrl\" (UniqueName: \"kubernetes.io/projected/ab99221a-fafe-469a-a7e1-3355f432075e-kube-api-access-prgrl\") pod \"placement-5bc6-account-create-pktcg\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") " pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.466187 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61443b8e-bd3a-437e-8440-323561bc319b-operator-scripts\") pod \"placement-db-create-rr57f\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") " pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.567592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s65cs\" (UniqueName: \"kubernetes.io/projected/61443b8e-bd3a-437e-8440-323561bc319b-kube-api-access-s65cs\") pod \"placement-db-create-rr57f\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") " pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.567658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ab99221a-fafe-469a-a7e1-3355f432075e-operator-scripts\") pod \"placement-5bc6-account-create-pktcg\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") " pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.567699 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prgrl\" (UniqueName: \"kubernetes.io/projected/ab99221a-fafe-469a-a7e1-3355f432075e-kube-api-access-prgrl\") pod \"placement-5bc6-account-create-pktcg\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") " pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.567766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61443b8e-bd3a-437e-8440-323561bc319b-operator-scripts\") pod \"placement-db-create-rr57f\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") " pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.568589 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61443b8e-bd3a-437e-8440-323561bc319b-operator-scripts\") pod \"placement-db-create-rr57f\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") " pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.568713 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab99221a-fafe-469a-a7e1-3355f432075e-operator-scripts\") pod \"placement-5bc6-account-create-pktcg\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") " pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.585585 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s65cs\" (UniqueName: \"kubernetes.io/projected/61443b8e-bd3a-437e-8440-323561bc319b-kube-api-access-s65cs\") pod \"placement-db-create-rr57f\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") " pod="openstack/placement-db-create-rr57f" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.585622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prgrl\" (UniqueName: \"kubernetes.io/projected/ab99221a-fafe-469a-a7e1-3355f432075e-kube-api-access-prgrl\") pod \"placement-5bc6-account-create-pktcg\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") " pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.702246 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc6-account-create-pktcg" Nov 24 18:05:37 crc kubenswrapper[4768]: I1124 18:05:37.710517 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.155035 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bc6-account-create-pktcg"]
Nov 24 18:05:38 crc kubenswrapper[4768]: W1124 18:05:38.163536 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab99221a_fafe_469a_a7e1_3355f432075e.slice/crio-f15bb9fa0f4d356a7358642e1a0debe8957c6dd9883fbee89808f30e0acaa0ff WatchSource:0}: Error finding container f15bb9fa0f4d356a7358642e1a0debe8957c6dd9883fbee89808f30e0acaa0ff: Status 404 returned error can't find the container with id f15bb9fa0f4d356a7358642e1a0debe8957c6dd9883fbee89808f30e0acaa0ff
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.214838 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rr57f"]
Nov 24 18:05:38 crc kubenswrapper[4768]: W1124 18:05:38.219073 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61443b8e_bd3a_437e_8440_323561bc319b.slice/crio-bb2703024f9beeaee6f01a20f069c5558cb8a8a61d627e6a9d895b1f6d768aa5 WatchSource:0}: Error finding container bb2703024f9beeaee6f01a20f069c5558cb8a8a61d627e6a9d895b1f6d768aa5: Status 404 returned error can't find the container with id bb2703024f9beeaee6f01a20f069c5558cb8a8a61d627e6a9d895b1f6d768aa5
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.471712 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.849476 4768 generic.go:334] "Generic (PLEG): container finished" podID="ab99221a-fafe-469a-a7e1-3355f432075e" containerID="a169a73070d4287c5b74c781368efec82b49ac4e6b5f372cf041e2a54b5af230" exitCode=0
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.849525 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc6-account-create-pktcg" event={"ID":"ab99221a-fafe-469a-a7e1-3355f432075e","Type":"ContainerDied","Data":"a169a73070d4287c5b74c781368efec82b49ac4e6b5f372cf041e2a54b5af230"}
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.850333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc6-account-create-pktcg" event={"ID":"ab99221a-fafe-469a-a7e1-3355f432075e","Type":"ContainerStarted","Data":"f15bb9fa0f4d356a7358642e1a0debe8957c6dd9883fbee89808f30e0acaa0ff"}
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.852321 4768 generic.go:334] "Generic (PLEG): container finished" podID="61443b8e-bd3a-437e-8440-323561bc319b" containerID="4dfff93858a5196489734d7e1c0d2b60a4876101d5d156beb41ed593099ac4b3" exitCode=0
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.852354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rr57f" event={"ID":"61443b8e-bd3a-437e-8440-323561bc319b","Type":"ContainerDied","Data":"4dfff93858a5196489734d7e1c0d2b60a4876101d5d156beb41ed593099ac4b3"}
Nov 24 18:05:38 crc kubenswrapper[4768]: I1124 18:05:38.852373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rr57f" event={"ID":"61443b8e-bd3a-437e-8440-323561bc319b","Type":"ContainerStarted","Data":"bb2703024f9beeaee6f01a20f069c5558cb8a8a61d627e6a9d895b1f6d768aa5"}
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.225338 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc6-account-create-pktcg"
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.232851 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rr57f"
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.315405 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prgrl\" (UniqueName: \"kubernetes.io/projected/ab99221a-fafe-469a-a7e1-3355f432075e-kube-api-access-prgrl\") pod \"ab99221a-fafe-469a-a7e1-3355f432075e\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") "
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.315548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s65cs\" (UniqueName: \"kubernetes.io/projected/61443b8e-bd3a-437e-8440-323561bc319b-kube-api-access-s65cs\") pod \"61443b8e-bd3a-437e-8440-323561bc319b\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") "
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.315568 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61443b8e-bd3a-437e-8440-323561bc319b-operator-scripts\") pod \"61443b8e-bd3a-437e-8440-323561bc319b\" (UID: \"61443b8e-bd3a-437e-8440-323561bc319b\") "
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.315709 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab99221a-fafe-469a-a7e1-3355f432075e-operator-scripts\") pod \"ab99221a-fafe-469a-a7e1-3355f432075e\" (UID: \"ab99221a-fafe-469a-a7e1-3355f432075e\") "
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.316540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61443b8e-bd3a-437e-8440-323561bc319b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61443b8e-bd3a-437e-8440-323561bc319b" (UID: "61443b8e-bd3a-437e-8440-323561bc319b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.316713 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab99221a-fafe-469a-a7e1-3355f432075e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab99221a-fafe-469a-a7e1-3355f432075e" (UID: "ab99221a-fafe-469a-a7e1-3355f432075e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.321798 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61443b8e-bd3a-437e-8440-323561bc319b-kube-api-access-s65cs" (OuterVolumeSpecName: "kube-api-access-s65cs") pod "61443b8e-bd3a-437e-8440-323561bc319b" (UID: "61443b8e-bd3a-437e-8440-323561bc319b"). InnerVolumeSpecName "kube-api-access-s65cs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.322549 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab99221a-fafe-469a-a7e1-3355f432075e-kube-api-access-prgrl" (OuterVolumeSpecName: "kube-api-access-prgrl") pod "ab99221a-fafe-469a-a7e1-3355f432075e" (UID: "ab99221a-fafe-469a-a7e1-3355f432075e"). InnerVolumeSpecName "kube-api-access-prgrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.417567 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab99221a-fafe-469a-a7e1-3355f432075e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.417631 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prgrl\" (UniqueName: \"kubernetes.io/projected/ab99221a-fafe-469a-a7e1-3355f432075e-kube-api-access-prgrl\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.417654 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61443b8e-bd3a-437e-8440-323561bc319b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.417672 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s65cs\" (UniqueName: \"kubernetes.io/projected/61443b8e-bd3a-437e-8440-323561bc319b-kube-api-access-s65cs\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.868115 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rr57f" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.868116 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rr57f" event={"ID":"61443b8e-bd3a-437e-8440-323561bc319b","Type":"ContainerDied","Data":"bb2703024f9beeaee6f01a20f069c5558cb8a8a61d627e6a9d895b1f6d768aa5"} Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.868244 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb2703024f9beeaee6f01a20f069c5558cb8a8a61d627e6a9d895b1f6d768aa5" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.870031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc6-account-create-pktcg" event={"ID":"ab99221a-fafe-469a-a7e1-3355f432075e","Type":"ContainerDied","Data":"f15bb9fa0f4d356a7358642e1a0debe8957c6dd9883fbee89808f30e0acaa0ff"} Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.870088 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15bb9fa0f4d356a7358642e1a0debe8957c6dd9883fbee89808f30e0acaa0ff" Nov 24 18:05:40 crc kubenswrapper[4768]: I1124 18:05:40.870196 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.564850 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tlqfl"]
Nov 24 18:05:42 crc kubenswrapper[4768]: E1124 18:05:42.565613 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61443b8e-bd3a-437e-8440-323561bc319b" containerName="mariadb-database-create"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.565629 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="61443b8e-bd3a-437e-8440-323561bc319b" containerName="mariadb-database-create"
Nov 24 18:05:42 crc kubenswrapper[4768]: E1124 18:05:42.565646 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab99221a-fafe-469a-a7e1-3355f432075e" containerName="mariadb-account-create"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.565653 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab99221a-fafe-469a-a7e1-3355f432075e" containerName="mariadb-account-create"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.565824 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="61443b8e-bd3a-437e-8440-323561bc319b" containerName="mariadb-database-create"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.565842 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab99221a-fafe-469a-a7e1-3355f432075e" containerName="mariadb-account-create"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.566543 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.576346 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tlqfl"]
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.651365 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-operator-scripts\") pod \"glance-db-create-tlqfl\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.651421 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjs9p\" (UniqueName: \"kubernetes.io/projected/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-kube-api-access-vjs9p\") pod \"glance-db-create-tlqfl\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.671913 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8f20-account-create-b25tz"]
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.672927 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f20-account-create-b25tz"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.675821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.677819 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8f20-account-create-b25tz"]
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.753191 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-operator-scripts\") pod \"glance-db-create-tlqfl\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.753236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjs9p\" (UniqueName: \"kubernetes.io/projected/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-kube-api-access-vjs9p\") pod \"glance-db-create-tlqfl\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.753295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52affb5e-149e-4868-a48d-4f4ab569947a-operator-scripts\") pod \"glance-8f20-account-create-b25tz\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " pod="openstack/glance-8f20-account-create-b25tz"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.753338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrkh\" (UniqueName: \"kubernetes.io/projected/52affb5e-149e-4868-a48d-4f4ab569947a-kube-api-access-fvrkh\") pod \"glance-8f20-account-create-b25tz\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " pod="openstack/glance-8f20-account-create-b25tz"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.754075 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-operator-scripts\") pod \"glance-db-create-tlqfl\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.771231 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjs9p\" (UniqueName: \"kubernetes.io/projected/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-kube-api-access-vjs9p\") pod \"glance-db-create-tlqfl\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " pod="openstack/glance-db-create-tlqfl"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.854183 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52affb5e-149e-4868-a48d-4f4ab569947a-operator-scripts\") pod \"glance-8f20-account-create-b25tz\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " pod="openstack/glance-8f20-account-create-b25tz"
Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.854260 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvrkh\" (UniqueName: \"kubernetes.io/projected/52affb5e-149e-4868-a48d-4f4ab569947a-kube-api-access-fvrkh\") pod \"glance-8f20-account-create-b25tz\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " pod="openstack/glance-8f20-account-create-b25tz"
pod="openstack/glance-8f20-account-create-b25tz" Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.855389 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52affb5e-149e-4868-a48d-4f4ab569947a-operator-scripts\") pod \"glance-8f20-account-create-b25tz\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " pod="openstack/glance-8f20-account-create-b25tz" Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.875479 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvrkh\" (UniqueName: \"kubernetes.io/projected/52affb5e-149e-4868-a48d-4f4ab569947a-kube-api-access-fvrkh\") pod \"glance-8f20-account-create-b25tz\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " pod="openstack/glance-8f20-account-create-b25tz" Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.888612 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tlqfl" Nov 24 18:05:42 crc kubenswrapper[4768]: I1124 18:05:42.994593 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f20-account-create-b25tz" Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.311376 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tlqfl"] Nov 24 18:05:43 crc kubenswrapper[4768]: W1124 18:05:43.320446 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa4d5295_ba8b_4369_a191_2e51f0cf1d51.slice/crio-4fd67a90895c87e8a4d98a0fd88364df44ecb814ec0fb278b910c1e43bd04a27 WatchSource:0}: Error finding container 4fd67a90895c87e8a4d98a0fd88364df44ecb814ec0fb278b910c1e43bd04a27: Status 404 returned error can't find the container with id 4fd67a90895c87e8a4d98a0fd88364df44ecb814ec0fb278b910c1e43bd04a27 Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.411286 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8f20-account-create-b25tz"] Nov 24 18:05:43 crc kubenswrapper[4768]: W1124 18:05:43.413823 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52affb5e_149e_4868_a48d_4f4ab569947a.slice/crio-952f942e36a6689c99ca66a3d8461c0d22b528aabe56a634795414d967f0591d WatchSource:0}: Error finding container 952f942e36a6689c99ca66a3d8461c0d22b528aabe56a634795414d967f0591d: Status 404 returned error can't find the container with id 952f942e36a6689c99ca66a3d8461c0d22b528aabe56a634795414d967f0591d Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.656987 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.657093 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.657192 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.658453 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b1fcca249f25d296bfba4402fd65255a8a672ed04eb8c495487a6905cab2500"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.658601 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://5b1fcca249f25d296bfba4402fd65255a8a672ed04eb8c495487a6905cab2500" gracePeriod=600 Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.892585 4768 generic.go:334] "Generic (PLEG): container finished" podID="fa4d5295-ba8b-4369-a191-2e51f0cf1d51" containerID="3f1429823adb11918549a411b384624645b80d96242d15a303ea7cb45600c115" exitCode=0 Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.892688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tlqfl" event={"ID":"fa4d5295-ba8b-4369-a191-2e51f0cf1d51","Type":"ContainerDied","Data":"3f1429823adb11918549a411b384624645b80d96242d15a303ea7cb45600c115"} Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.893020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tlqfl" event={"ID":"fa4d5295-ba8b-4369-a191-2e51f0cf1d51","Type":"ContainerStarted","Data":"4fd67a90895c87e8a4d98a0fd88364df44ecb814ec0fb278b910c1e43bd04a27"} Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.894539 4768 generic.go:334] "Generic (PLEG): container finished" podID="52affb5e-149e-4868-a48d-4f4ab569947a" containerID="6ef4d6d26867bad5c71db8fe356bb1acabb8dcb554eecea630377aa8eafa3df9" exitCode=0 Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.894591 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f20-account-create-b25tz" event={"ID":"52affb5e-149e-4868-a48d-4f4ab569947a","Type":"ContainerDied","Data":"6ef4d6d26867bad5c71db8fe356bb1acabb8dcb554eecea630377aa8eafa3df9"} Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.894624 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f20-account-create-b25tz" event={"ID":"52affb5e-149e-4868-a48d-4f4ab569947a","Type":"ContainerStarted","Data":"952f942e36a6689c99ca66a3d8461c0d22b528aabe56a634795414d967f0591d"} Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.899348 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="5b1fcca249f25d296bfba4402fd65255a8a672ed04eb8c495487a6905cab2500" exitCode=0 Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.909414 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"5b1fcca249f25d296bfba4402fd65255a8a672ed04eb8c495487a6905cab2500"} Nov 24 18:05:43 crc kubenswrapper[4768]: I1124 18:05:43.909474 4768 scope.go:117] "RemoveContainer" containerID="b4583a9ac279158eca4e8f57a4180ced088f2fed29490556a10e250154558a77" Nov 24 18:05:44 crc kubenswrapper[4768]: I1124 18:05:44.919402 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"99ccf8cd01116f9aed046232143cdd0d069d3d1d4cac3ec060c0e2b82cb26f4b"} Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.326364 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tlqfl" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.333365 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f20-account-create-b25tz" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.392444 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvrkh\" (UniqueName: \"kubernetes.io/projected/52affb5e-149e-4868-a48d-4f4ab569947a-kube-api-access-fvrkh\") pod \"52affb5e-149e-4868-a48d-4f4ab569947a\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.392883 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjs9p\" (UniqueName: \"kubernetes.io/projected/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-kube-api-access-vjs9p\") pod \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.393022 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52affb5e-149e-4868-a48d-4f4ab569947a-operator-scripts\") pod \"52affb5e-149e-4868-a48d-4f4ab569947a\" (UID: \"52affb5e-149e-4868-a48d-4f4ab569947a\") " Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.393070 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-operator-scripts\") pod \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\" (UID: \"fa4d5295-ba8b-4369-a191-2e51f0cf1d51\") " Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.393878 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52affb5e-149e-4868-a48d-4f4ab569947a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52affb5e-149e-4868-a48d-4f4ab569947a" (UID: "52affb5e-149e-4868-a48d-4f4ab569947a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.393924 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa4d5295-ba8b-4369-a191-2e51f0cf1d51" (UID: "fa4d5295-ba8b-4369-a191-2e51f0cf1d51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.400698 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52affb5e-149e-4868-a48d-4f4ab569947a-kube-api-access-fvrkh" (OuterVolumeSpecName: "kube-api-access-fvrkh") pod "52affb5e-149e-4868-a48d-4f4ab569947a" (UID: "52affb5e-149e-4868-a48d-4f4ab569947a"). InnerVolumeSpecName "kube-api-access-fvrkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.401205 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-kube-api-access-vjs9p" (OuterVolumeSpecName: "kube-api-access-vjs9p") pod "fa4d5295-ba8b-4369-a191-2e51f0cf1d51" (UID: "fa4d5295-ba8b-4369-a191-2e51f0cf1d51"). InnerVolumeSpecName "kube-api-access-vjs9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.495376 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52affb5e-149e-4868-a48d-4f4ab569947a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.495634 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.495726 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvrkh\" (UniqueName: \"kubernetes.io/projected/52affb5e-149e-4868-a48d-4f4ab569947a-kube-api-access-fvrkh\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.495796 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjs9p\" (UniqueName: \"kubernetes.io/projected/fa4d5295-ba8b-4369-a191-2e51f0cf1d51-kube-api-access-vjs9p\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.933103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tlqfl" event={"ID":"fa4d5295-ba8b-4369-a191-2e51f0cf1d51","Type":"ContainerDied","Data":"4fd67a90895c87e8a4d98a0fd88364df44ecb814ec0fb278b910c1e43bd04a27"} Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.934065 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tlqfl" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.936910 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd67a90895c87e8a4d98a0fd88364df44ecb814ec0fb278b910c1e43bd04a27" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.938952 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f20-account-create-b25tz" event={"ID":"52affb5e-149e-4868-a48d-4f4ab569947a","Type":"ContainerDied","Data":"952f942e36a6689c99ca66a3d8461c0d22b528aabe56a634795414d967f0591d"} Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.938992 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952f942e36a6689c99ca66a3d8461c0d22b528aabe56a634795414d967f0591d" Nov 24 18:05:45 crc kubenswrapper[4768]: I1124 18:05:45.939021 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.920148 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dt8hn"]
Nov 24 18:05:46 crc kubenswrapper[4768]: E1124 18:05:46.920988 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4d5295-ba8b-4369-a191-2e51f0cf1d51" containerName="mariadb-database-create"
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.921013 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4d5295-ba8b-4369-a191-2e51f0cf1d51" containerName="mariadb-database-create"
Nov 24 18:05:46 crc kubenswrapper[4768]: E1124 18:05:46.921044 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52affb5e-149e-4868-a48d-4f4ab569947a" containerName="mariadb-account-create"
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.921053 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="52affb5e-149e-4868-a48d-4f4ab569947a" containerName="mariadb-account-create"
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.921248 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4d5295-ba8b-4369-a191-2e51f0cf1d51" containerName="mariadb-database-create"
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.921264 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="52affb5e-149e-4868-a48d-4f4ab569947a" containerName="mariadb-account-create"
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.921903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dt8hn"
Nov 24 18:05:46 crc kubenswrapper[4768]: I1124 18:05:46.929811 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dt8hn"]
Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.019968 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d0123e4-321e-46c6-9fad-ab2860c14050-operator-scripts\") pod \"keystone-db-create-dt8hn\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " pod="openstack/keystone-db-create-dt8hn"
Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.020029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmm9\" (UniqueName: \"kubernetes.io/projected/5d0123e4-321e-46c6-9fad-ab2860c14050-kube-api-access-xwmm9\") pod \"keystone-db-create-dt8hn\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " pod="openstack/keystone-db-create-dt8hn"
Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.027056 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1661-account-create-6bzhc"]
Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.028318 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1661-account-create-6bzhc"
Need to start a new one" pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.031307 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.036825 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1661-account-create-6bzhc"] Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.121595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae89b236-a8cc-49bc-8ad3-6601f4b97450-operator-scripts\") pod \"keystone-1661-account-create-6bzhc\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.121705 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d0123e4-321e-46c6-9fad-ab2860c14050-operator-scripts\") pod \"keystone-db-create-dt8hn\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.121786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwmm9\" (UniqueName: \"kubernetes.io/projected/5d0123e4-321e-46c6-9fad-ab2860c14050-kube-api-access-xwmm9\") pod \"keystone-db-create-dt8hn\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.121886 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxq9\" (UniqueName: \"kubernetes.io/projected/ae89b236-a8cc-49bc-8ad3-6601f4b97450-kube-api-access-bmxq9\") pod \"keystone-1661-account-create-6bzhc\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.122420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d0123e4-321e-46c6-9fad-ab2860c14050-operator-scripts\") pod \"keystone-db-create-dt8hn\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.141580 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwmm9\" (UniqueName: \"kubernetes.io/projected/5d0123e4-321e-46c6-9fad-ab2860c14050-kube-api-access-xwmm9\") pod \"keystone-db-create-dt8hn\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.222710 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmxq9\" (UniqueName: \"kubernetes.io/projected/ae89b236-a8cc-49bc-8ad3-6601f4b97450-kube-api-access-bmxq9\") pod \"keystone-1661-account-create-6bzhc\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.223141 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae89b236-a8cc-49bc-8ad3-6601f4b97450-operator-scripts\") pod \"keystone-1661-account-create-6bzhc\" (UID: 
\"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.224017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae89b236-a8cc-49bc-8ad3-6601f4b97450-operator-scripts\") pod \"keystone-1661-account-create-6bzhc\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.240358 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmxq9\" (UniqueName: \"kubernetes.io/projected/ae89b236-a8cc-49bc-8ad3-6601f4b97450-kube-api-access-bmxq9\") pod \"keystone-1661-account-create-6bzhc\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.245192 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.344928 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.681325 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dt8hn"] Nov 24 18:05:47 crc kubenswrapper[4768]: W1124 18:05:47.684046 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0123e4_321e_46c6_9fad_ab2860c14050.slice/crio-c4b636abfaf28b18ff4b2fc2a166f7850b72d245a496d0af4f839a9ac70b8746 WatchSource:0}: Error finding container c4b636abfaf28b18ff4b2fc2a166f7850b72d245a496d0af4f839a9ac70b8746: Status 404 returned error can't find the container with id c4b636abfaf28b18ff4b2fc2a166f7850b72d245a496d0af4f839a9ac70b8746 Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.782365 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1661-account-create-6bzhc"] Nov 24 18:05:47 crc kubenswrapper[4768]: W1124 18:05:47.787862 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae89b236_a8cc_49bc_8ad3_6601f4b97450.slice/crio-ef9ceb6c0dbac5acca66ee42c775c8beeae7e299ad4ab2bbacee93d16abd9bc1 WatchSource:0}: Error finding container ef9ceb6c0dbac5acca66ee42c775c8beeae7e299ad4ab2bbacee93d16abd9bc1: Status 404 returned error can't find the container with id ef9ceb6c0dbac5acca66ee42c775c8beeae7e299ad4ab2bbacee93d16abd9bc1 Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.814227 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-q6fzj"] Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.816822 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.819337 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t2kxl" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.819375 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.823873 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-q6fzj"] Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.833336 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-combined-ca-bundle\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.833408 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-config-data\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.833434 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-db-sync-config-data\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.833457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvk6r\" (UniqueName: \"kubernetes.io/projected/6d3c858f-af78-4df6-b30a-b7921b5a80f3-kube-api-access-fvk6r\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.935318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-combined-ca-bundle\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.935460 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-config-data\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.936321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-db-sync-config-data\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.936418 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvk6r\" (UniqueName: \"kubernetes.io/projected/6d3c858f-af78-4df6-b30a-b7921b5a80f3-kube-api-access-fvk6r\") pod 
\"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.945522 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-db-sync-config-data\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.945734 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-config-data\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.946786 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-combined-ca-bundle\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.952924 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvk6r\" (UniqueName: \"kubernetes.io/projected/6d3c858f-af78-4df6-b30a-b7921b5a80f3-kube-api-access-fvk6r\") pod \"glance-db-sync-q6fzj\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.980038 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dt8hn" event={"ID":"5d0123e4-321e-46c6-9fad-ab2860c14050","Type":"ContainerStarted","Data":"c565963c12bbdeecd4b0562451d2fdc911dc48360cd7c249d9642fe77b841227"} Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.980108 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dt8hn" event={"ID":"5d0123e4-321e-46c6-9fad-ab2860c14050","Type":"ContainerStarted","Data":"c4b636abfaf28b18ff4b2fc2a166f7850b72d245a496d0af4f839a9ac70b8746"} Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.983841 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1661-account-create-6bzhc" event={"ID":"ae89b236-a8cc-49bc-8ad3-6601f4b97450","Type":"ContainerStarted","Data":"ada2beb9e7d37911e3ac421d94c6a5979ba2df3ab7033a8dfa9ef7f3ac59d407"} Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.983993 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1661-account-create-6bzhc" event={"ID":"ae89b236-a8cc-49bc-8ad3-6601f4b97450","Type":"ContainerStarted","Data":"ef9ceb6c0dbac5acca66ee42c775c8beeae7e299ad4ab2bbacee93d16abd9bc1"} Nov 24 18:05:47 crc kubenswrapper[4768]: I1124 18:05:47.997231 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-dt8hn" podStartSLOduration=1.997211401 podStartE2EDuration="1.997211401s" podCreationTimestamp="2025-11-24 18:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:05:47.995213842 +0000 UTC m=+986.855795619" watchObservedRunningTime="2025-11-24 18:05:47.997211401 +0000 UTC m=+986.857793188" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.014234 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-1661-account-create-6bzhc" podStartSLOduration=1.014213889 podStartE2EDuration="1.014213889s" podCreationTimestamp="2025-11-24 18:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:05:48.012953828 +0000 UTC m=+986.873535605" watchObservedRunningTime="2025-11-24 18:05:48.014213889 +0000 UTC m=+986.874795666" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.105832 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zlg8p" podUID="710c430d-b973-47b9-9917-2db7864f7570" containerName="ovn-controller" probeResult="failure" output=< Nov 24 18:05:48 crc kubenswrapper[4768]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 18:05:48 crc kubenswrapper[4768]: > Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.120643 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.132721 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-q6fzj" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.132879 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xb8qp" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.360612 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zlg8p-config-qzfhh"] Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.362420 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.364919 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.369050 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zlg8p-config-qzfhh"] Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.446732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-scripts\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.446832 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-additional-scripts\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.446877 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-log-ovn\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.446916 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run-ovn\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.446938 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ggc\" (UniqueName: \"kubernetes.io/projected/832b8600-b3bb-478f-a494-c4ace355a732-kube-api-access-w2ggc\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.446979 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549108 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-log-ovn\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549196 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run-ovn\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ggc\" (UniqueName: \"kubernetes.io/projected/832b8600-b3bb-478f-a494-c4ace355a732-kube-api-access-w2ggc\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549295 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-scripts\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549395 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-additional-scripts\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549442 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-log-ovn\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run-ovn\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.549663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.551166 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-additional-scripts\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.551504 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-scripts\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.569925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ggc\" (UniqueName: \"kubernetes.io/projected/832b8600-b3bb-478f-a494-c4ace355a732-kube-api-access-w2ggc\") pod \"ovn-controller-zlg8p-config-qzfhh\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.658730 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-q6fzj"] Nov 24 18:05:48 crc kubenswrapper[4768]: W1124 18:05:48.664583 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3c858f_af78_4df6_b30a_b7921b5a80f3.slice/crio-a9addc227f200ded443770784d2427eb57fdfdd970bb38673a60d8f3cffeefda WatchSource:0}: Error finding container a9addc227f200ded443770784d2427eb57fdfdd970bb38673a60d8f3cffeefda: Status 404 returned error can't find the container with id a9addc227f200ded443770784d2427eb57fdfdd970bb38673a60d8f3cffeefda Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.686243 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.995714 4768 generic.go:334] "Generic (PLEG): container finished" podID="5d0123e4-321e-46c6-9fad-ab2860c14050" containerID="c565963c12bbdeecd4b0562451d2fdc911dc48360cd7c249d9642fe77b841227" exitCode=0 Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.995786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dt8hn" event={"ID":"5d0123e4-321e-46c6-9fad-ab2860c14050","Type":"ContainerDied","Data":"c565963c12bbdeecd4b0562451d2fdc911dc48360cd7c249d9642fe77b841227"} Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.998007 4768 generic.go:334] "Generic (PLEG): container finished" podID="ae89b236-a8cc-49bc-8ad3-6601f4b97450" containerID="ada2beb9e7d37911e3ac421d94c6a5979ba2df3ab7033a8dfa9ef7f3ac59d407" exitCode=0 Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.998047 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1661-account-create-6bzhc" event={"ID":"ae89b236-a8cc-49bc-8ad3-6601f4b97450","Type":"ContainerDied","Data":"ada2beb9e7d37911e3ac421d94c6a5979ba2df3ab7033a8dfa9ef7f3ac59d407"} Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.999541 4768 generic.go:334] "Generic (PLEG): container finished" podID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerID="c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b" exitCode=0 Nov 24 18:05:48 crc kubenswrapper[4768]: I1124 18:05:48.999604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f67f41ac-4a1d-45c4-baaf-500062871fcb","Type":"ContainerDied","Data":"c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b"} Nov 24 18:05:49 crc kubenswrapper[4768]: I1124 18:05:49.003387 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q6fzj" event={"ID":"6d3c858f-af78-4df6-b30a-b7921b5a80f3","Type":"ContainerStarted","Data":"a9addc227f200ded443770784d2427eb57fdfdd970bb38673a60d8f3cffeefda"} Nov 24 18:05:49 crc kubenswrapper[4768]: I1124 18:05:49.130867 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zlg8p-config-qzfhh"] Nov 24 18:05:49 crc kubenswrapper[4768]: W1124 18:05:49.134529 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod832b8600_b3bb_478f_a494_c4ace355a732.slice/crio-8c5f216854f6be3a03cca247202dae7ae883b8c4bce35a3b11bd41568fef41c4 WatchSource:0}: Error finding container 8c5f216854f6be3a03cca247202dae7ae883b8c4bce35a3b11bd41568fef41c4: Status 404 returned error can't find the container with id 8c5f216854f6be3a03cca247202dae7ae883b8c4bce35a3b11bd41568fef41c4 Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.015578 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f67f41ac-4a1d-45c4-baaf-500062871fcb","Type":"ContainerStarted","Data":"4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22"} Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.017136 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.019086 4768 generic.go:334] "Generic (PLEG): container finished" podID="832b8600-b3bb-478f-a494-c4ace355a732" containerID="d7f8c2c98f1774e813d7e6e073329d9d4ce0bba97314115fa6a9ff2d61646888" exitCode=0 Nov 24 18:05:50 crc kubenswrapper[4768]: 
I1124 18:05:50.019184 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p-config-qzfhh" event={"ID":"832b8600-b3bb-478f-a494-c4ace355a732","Type":"ContainerDied","Data":"d7f8c2c98f1774e813d7e6e073329d9d4ce0bba97314115fa6a9ff2d61646888"} Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.019373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p-config-qzfhh" event={"ID":"832b8600-b3bb-478f-a494-c4ace355a732","Type":"ContainerStarted","Data":"8c5f216854f6be3a03cca247202dae7ae883b8c4bce35a3b11bd41568fef41c4"} Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.052057 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.339608304 podStartE2EDuration="58.052032483s" podCreationTimestamp="2025-11-24 18:04:52 +0000 UTC" firstStartedPulling="2025-11-24 18:05:06.643920326 +0000 UTC m=+945.504502103" lastFinishedPulling="2025-11-24 18:05:14.356344505 +0000 UTC m=+953.216926282" observedRunningTime="2025-11-24 18:05:50.046038595 +0000 UTC m=+988.906620392" watchObservedRunningTime="2025-11-24 18:05:50.052032483 +0000 UTC m=+988.912614270" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.379540 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.391585 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.486395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d0123e4-321e-46c6-9fad-ab2860c14050-operator-scripts\") pod \"5d0123e4-321e-46c6-9fad-ab2860c14050\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.486520 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmm9\" (UniqueName: \"kubernetes.io/projected/5d0123e4-321e-46c6-9fad-ab2860c14050-kube-api-access-xwmm9\") pod \"5d0123e4-321e-46c6-9fad-ab2860c14050\" (UID: \"5d0123e4-321e-46c6-9fad-ab2860c14050\") " Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.486548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae89b236-a8cc-49bc-8ad3-6601f4b97450-operator-scripts\") pod \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.486624 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmxq9\" (UniqueName: \"kubernetes.io/projected/ae89b236-a8cc-49bc-8ad3-6601f4b97450-kube-api-access-bmxq9\") pod \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\" (UID: \"ae89b236-a8cc-49bc-8ad3-6601f4b97450\") " Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.486964 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d0123e4-321e-46c6-9fad-ab2860c14050-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d0123e4-321e-46c6-9fad-ab2860c14050" (UID: "5d0123e4-321e-46c6-9fad-ab2860c14050"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.487227 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae89b236-a8cc-49bc-8ad3-6601f4b97450-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae89b236-a8cc-49bc-8ad3-6601f4b97450" (UID: "ae89b236-a8cc-49bc-8ad3-6601f4b97450"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.493046 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0123e4-321e-46c6-9fad-ab2860c14050-kube-api-access-xwmm9" (OuterVolumeSpecName: "kube-api-access-xwmm9") pod "5d0123e4-321e-46c6-9fad-ab2860c14050" (UID: "5d0123e4-321e-46c6-9fad-ab2860c14050"). InnerVolumeSpecName "kube-api-access-xwmm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.495967 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae89b236-a8cc-49bc-8ad3-6601f4b97450-kube-api-access-bmxq9" (OuterVolumeSpecName: "kube-api-access-bmxq9") pod "ae89b236-a8cc-49bc-8ad3-6601f4b97450" (UID: "ae89b236-a8cc-49bc-8ad3-6601f4b97450"). InnerVolumeSpecName "kube-api-access-bmxq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.587912 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d0123e4-321e-46c6-9fad-ab2860c14050-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.587944 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwmm9\" (UniqueName: \"kubernetes.io/projected/5d0123e4-321e-46c6-9fad-ab2860c14050-kube-api-access-xwmm9\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.587957 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae89b236-a8cc-49bc-8ad3-6601f4b97450-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:50 crc kubenswrapper[4768]: I1124 18:05:50.587967 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmxq9\" (UniqueName: \"kubernetes.io/projected/ae89b236-a8cc-49bc-8ad3-6601f4b97450-kube-api-access-bmxq9\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.037843 4768 generic.go:334] "Generic (PLEG): container finished" podID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerID="a4aa0bb200172f83176cd90f33b02eadaee041ecd11044f7965416b7cf3adf3d" exitCode=0 Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.037952 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96e8147b-fab1-4601-b8c7-00764af14ba7","Type":"ContainerDied","Data":"a4aa0bb200172f83176cd90f33b02eadaee041ecd11044f7965416b7cf3adf3d"} Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.039702 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dt8hn" event={"ID":"5d0123e4-321e-46c6-9fad-ab2860c14050","Type":"ContainerDied","Data":"c4b636abfaf28b18ff4b2fc2a166f7850b72d245a496d0af4f839a9ac70b8746"} Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.039761 4768 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c4b636abfaf28b18ff4b2fc2a166f7850b72d245a496d0af4f839a9ac70b8746" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.039842 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dt8hn" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.048786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1661-account-create-6bzhc" event={"ID":"ae89b236-a8cc-49bc-8ad3-6601f4b97450","Type":"ContainerDied","Data":"ef9ceb6c0dbac5acca66ee42c775c8beeae7e299ad4ab2bbacee93d16abd9bc1"} Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.048858 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9ceb6c0dbac5acca66ee42c775c8beeae7e299ad4ab2bbacee93d16abd9bc1" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.048905 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1661-account-create-6bzhc" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.354719 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.507579 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-additional-scripts\") pod \"832b8600-b3bb-478f-a494-c4ace355a732\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.507677 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-log-ovn\") pod \"832b8600-b3bb-478f-a494-c4ace355a732\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.507735 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2ggc\" (UniqueName: \"kubernetes.io/projected/832b8600-b3bb-478f-a494-c4ace355a732-kube-api-access-w2ggc\") pod \"832b8600-b3bb-478f-a494-c4ace355a732\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.507771 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run-ovn\") pod \"832b8600-b3bb-478f-a494-c4ace355a732\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.507832 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-scripts\") pod \"832b8600-b3bb-478f-a494-c4ace355a732\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.507885 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run\") pod \"832b8600-b3bb-478f-a494-c4ace355a732\" (UID: \"832b8600-b3bb-478f-a494-c4ace355a732\") " Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.508190 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod 
"832b8600-b3bb-478f-a494-c4ace355a732" (UID: "832b8600-b3bb-478f-a494-c4ace355a732"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.508200 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "832b8600-b3bb-478f-a494-c4ace355a732" (UID: "832b8600-b3bb-478f-a494-c4ace355a732"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.508279 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run" (OuterVolumeSpecName: "var-run") pod "832b8600-b3bb-478f-a494-c4ace355a732" (UID: "832b8600-b3bb-478f-a494-c4ace355a732"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.508938 4768 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.508957 4768 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.508972 4768 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/832b8600-b3bb-478f-a494-c4ace355a732-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.509037 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "832b8600-b3bb-478f-a494-c4ace355a732" (UID: "832b8600-b3bb-478f-a494-c4ace355a732"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.509304 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-scripts" (OuterVolumeSpecName: "scripts") pod "832b8600-b3bb-478f-a494-c4ace355a732" (UID: "832b8600-b3bb-478f-a494-c4ace355a732"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.513894 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832b8600-b3bb-478f-a494-c4ace355a732-kube-api-access-w2ggc" (OuterVolumeSpecName: "kube-api-access-w2ggc") pod "832b8600-b3bb-478f-a494-c4ace355a732" (UID: "832b8600-b3bb-478f-a494-c4ace355a732"). InnerVolumeSpecName "kube-api-access-w2ggc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.610751 4768 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.610788 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2ggc\" (UniqueName: \"kubernetes.io/projected/832b8600-b3bb-478f-a494-c4ace355a732-kube-api-access-w2ggc\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:51 crc kubenswrapper[4768]: I1124 18:05:51.610805 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832b8600-b3bb-478f-a494-c4ace355a732-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.058889 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-qzfhh" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.058888 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p-config-qzfhh" event={"ID":"832b8600-b3bb-478f-a494-c4ace355a732","Type":"ContainerDied","Data":"8c5f216854f6be3a03cca247202dae7ae883b8c4bce35a3b11bd41568fef41c4"} Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.059022 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5f216854f6be3a03cca247202dae7ae883b8c4bce35a3b11bd41568fef41c4" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.064994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96e8147b-fab1-4601-b8c7-00764af14ba7","Type":"ContainerStarted","Data":"fb25e3f1702fc6111b530154f86d335b831c203bc5818f6f7da298bb2061ef6b"} Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.065210 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.095815 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.708624212 podStartE2EDuration="1m0.095786572s" podCreationTimestamp="2025-11-24 18:04:52 +0000 UTC" firstStartedPulling="2025-11-24 18:05:07.07087551 +0000 UTC m=+945.931457287" lastFinishedPulling="2025-11-24 18:05:14.45803787 +0000 UTC m=+953.318619647" observedRunningTime="2025-11-24 18:05:52.088448942 +0000 UTC m=+990.949030719" watchObservedRunningTime="2025-11-24 18:05:52.095786572 +0000 UTC m=+990.956368349" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.470123 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zlg8p-config-qzfhh"] Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.475544 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zlg8p-config-qzfhh"] Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.555705 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zlg8p-config-k68pt"] Nov 24 18:05:52 crc kubenswrapper[4768]: E1124 18:05:52.556137 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae89b236-a8cc-49bc-8ad3-6601f4b97450" containerName="mariadb-account-create" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556154 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ae89b236-a8cc-49bc-8ad3-6601f4b97450" containerName="mariadb-account-create" Nov 24 18:05:52 crc kubenswrapper[4768]: E1124 18:05:52.556163 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0123e4-321e-46c6-9fad-ab2860c14050" containerName="mariadb-database-create" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556170 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0123e4-321e-46c6-9fad-ab2860c14050" containerName="mariadb-database-create" Nov 24 18:05:52 crc kubenswrapper[4768]: E1124 18:05:52.556181 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832b8600-b3bb-478f-a494-c4ace355a732" containerName="ovn-config" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556187 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="832b8600-b3bb-478f-a494-c4ace355a732" containerName="ovn-config" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556346 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0123e4-321e-46c6-9fad-ab2860c14050" containerName="mariadb-database-create" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556357 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="832b8600-b3bb-478f-a494-c4ace355a732" containerName="ovn-config" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556370 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae89b236-a8cc-49bc-8ad3-6601f4b97450" containerName="mariadb-account-create" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.556962 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.559447 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.574481 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zlg8p-config-k68pt"] Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.738473 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.738589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-additional-scripts\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.738644 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-log-ovn\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.738680 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run-ovn\") pod 
\"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.738810 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjnt\" (UniqueName: \"kubernetes.io/projected/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-kube-api-access-khjnt\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.738942 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-scripts\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.840724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.840802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-additional-scripts\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.840843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-log-ovn\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.840871 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run-ovn\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.840890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjnt\" (UniqueName: \"kubernetes.io/projected/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-kube-api-access-khjnt\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.840932 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-scripts\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.841157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.841247 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-log-ovn\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.841359 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run-ovn\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.841663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-additional-scripts\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.843191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-scripts\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.874990 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjnt\" (UniqueName: \"kubernetes.io/projected/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-kube-api-access-khjnt\") pod \"ovn-controller-zlg8p-config-k68pt\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:52 crc kubenswrapper[4768]: I1124 18:05:52.877911 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:05:53 crc kubenswrapper[4768]: I1124 18:05:53.117104 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zlg8p" Nov 24 18:05:53 crc kubenswrapper[4768]: I1124 18:05:53.303381 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zlg8p-config-k68pt"] Nov 24 18:05:53 crc kubenswrapper[4768]: W1124 18:05:53.309785 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2239e1_9605_4b3b_b6db_3b1cac1369bb.slice/crio-27d645987a5baf9948b1503a416983dda1f0001d5f4fc3d5132cc59aec10f068 WatchSource:0}: Error finding container 27d645987a5baf9948b1503a416983dda1f0001d5f4fc3d5132cc59aec10f068: Status 404 returned error can't find the container with id 27d645987a5baf9948b1503a416983dda1f0001d5f4fc3d5132cc59aec10f068 Nov 24 18:05:53 crc kubenswrapper[4768]: I1124 18:05:53.909395 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832b8600-b3bb-478f-a494-c4ace355a732" path="/var/lib/kubelet/pods/832b8600-b3bb-478f-a494-c4ace355a732/volumes" Nov 24 18:05:54 crc kubenswrapper[4768]: I1124 18:05:54.092888 4768 generic.go:334] "Generic (PLEG): container finished" podID="aa2239e1-9605-4b3b-b6db-3b1cac1369bb" containerID="d0cce081462ef1068ce1f43d1a38b3ba1170cd30a01d5c10b9e84d42ae4556ba" exitCode=0 Nov 24 18:05:54 crc kubenswrapper[4768]: I1124 18:05:54.092999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p-config-k68pt" event={"ID":"aa2239e1-9605-4b3b-b6db-3b1cac1369bb","Type":"ContainerDied","Data":"d0cce081462ef1068ce1f43d1a38b3ba1170cd30a01d5c10b9e84d42ae4556ba"} Nov 24 18:05:54 crc kubenswrapper[4768]: I1124 18:05:54.093260 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p-config-k68pt" event={"ID":"aa2239e1-9605-4b3b-b6db-3b1cac1369bb","Type":"ContainerStarted","Data":"27d645987a5baf9948b1503a416983dda1f0001d5f4fc3d5132cc59aec10f068"} Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.148575 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.164211 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zlg8p-config-k68pt" event={"ID":"aa2239e1-9605-4b3b-b6db-3b1cac1369bb","Type":"ContainerDied","Data":"27d645987a5baf9948b1503a416983dda1f0001d5f4fc3d5132cc59aec10f068"} Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.164299 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d645987a5baf9948b1503a416983dda1f0001d5f4fc3d5132cc59aec10f068" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.164481 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zlg8p-config-k68pt" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.298669 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run-ovn\") pod \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.298729 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-scripts\") pod \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.298798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-log-ovn\") pod \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.298916 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khjnt\" (UniqueName: \"kubernetes.io/projected/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-kube-api-access-khjnt\") pod \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.299001 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-additional-scripts\") pod \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.299342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run\") pod \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\" (UID: \"aa2239e1-9605-4b3b-b6db-3b1cac1369bb\") " Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.299563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "aa2239e1-9605-4b3b-b6db-3b1cac1369bb" (UID: "aa2239e1-9605-4b3b-b6db-3b1cac1369bb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.300260 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-scripts" (OuterVolumeSpecName: "scripts") pod "aa2239e1-9605-4b3b-b6db-3b1cac1369bb" (UID: "aa2239e1-9605-4b3b-b6db-3b1cac1369bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.300678 4768 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.300745 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.300860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run" (OuterVolumeSpecName: "var-run") pod "aa2239e1-9605-4b3b-b6db-3b1cac1369bb" (UID: "aa2239e1-9605-4b3b-b6db-3b1cac1369bb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.301256 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "aa2239e1-9605-4b3b-b6db-3b1cac1369bb" (UID: "aa2239e1-9605-4b3b-b6db-3b1cac1369bb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.302297 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "aa2239e1-9605-4b3b-b6db-3b1cac1369bb" (UID: "aa2239e1-9605-4b3b-b6db-3b1cac1369bb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.304395 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-kube-api-access-khjnt" (OuterVolumeSpecName: "kube-api-access-khjnt") pod "aa2239e1-9605-4b3b-b6db-3b1cac1369bb" (UID: "aa2239e1-9605-4b3b-b6db-3b1cac1369bb"). InnerVolumeSpecName "kube-api-access-khjnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.402381 4768 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.402427 4768 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.402441 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khjnt\" (UniqueName: \"kubernetes.io/projected/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-kube-api-access-khjnt\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:01 crc kubenswrapper[4768]: I1124 18:06:01.402458 4768 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa2239e1-9605-4b3b-b6db-3b1cac1369bb-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:02 crc kubenswrapper[4768]: I1124 18:06:02.172773 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q6fzj" event={"ID":"6d3c858f-af78-4df6-b30a-b7921b5a80f3","Type":"ContainerStarted","Data":"fae19a5ef71d853c1657876c58cf71ca2f4bee33723872d93533aa2608ff41ba"} Nov 24 18:06:02 crc kubenswrapper[4768]: I1124 18:06:02.194514 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-q6fzj" podStartSLOduration=2.678077176 podStartE2EDuration="15.194474159s" podCreationTimestamp="2025-11-24 18:05:47 +0000 UTC" firstStartedPulling="2025-11-24 18:05:48.666645498 +0000 UTC m=+987.527227275" lastFinishedPulling="2025-11-24 18:06:01.183042471 +0000 UTC m=+1000.043624258" observedRunningTime="2025-11-24 18:06:02.187761804 +0000 UTC m=+1001.048343571" watchObservedRunningTime="2025-11-24 18:06:02.194474159 +0000 UTC m=+1001.055055936" Nov 24 18:06:02 crc kubenswrapper[4768]: I1124 18:06:02.245745 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zlg8p-config-k68pt"] Nov 24 18:06:02 crc kubenswrapper[4768]: I1124 18:06:02.251666 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zlg8p-config-k68pt"] Nov 24 18:06:03 crc kubenswrapper[4768]: I1124 18:06:03.863732 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 18:06:03 crc kubenswrapper[4768]: I1124 18:06:03.909052 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2239e1-9605-4b3b-b6db-3b1cac1369bb" path="/var/lib/kubelet/pods/aa2239e1-9605-4b3b-b6db-3b1cac1369bb/volumes" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.143758 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.218931 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fjmk9"] Nov 24 18:06:04 crc kubenswrapper[4768]: E1124 18:06:04.219248 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2239e1-9605-4b3b-b6db-3b1cac1369bb" containerName="ovn-config" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.219260 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2239e1-9605-4b3b-b6db-3b1cac1369bb" containerName="ovn-config" Nov 24 18:06:04 crc 
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.219420 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2239e1-9605-4b3b-b6db-3b1cac1369bb" containerName="ovn-config"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.220053 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.245085 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fjmk9"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.302837 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fwkxc"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.310622 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.343942 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-af38-account-create-rns5f"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.345203 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af38-account-create-rns5f"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.350996 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.354579 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d3e000-d092-48ca-bf36-ecbb55cf016b-operator-scripts\") pod \"cinder-db-create-fjmk9\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.354645 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfzj4\" (UniqueName: \"kubernetes.io/projected/96d3e000-d092-48ca-bf36-ecbb55cf016b-kube-api-access-zfzj4\") pod \"cinder-db-create-fjmk9\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.361969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fwkxc"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.366149 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-af38-account-create-rns5f"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.403406 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9e79-account-create-r62zt"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.405593 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9e79-account-create-r62zt"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.408407 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.413377 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9e79-account-create-r62zt"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.456635 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e1e485-bf18-48d5-bb34-f213b5680994-operator-scripts\") pod \"cinder-af38-account-create-rns5f\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " pod="openstack/cinder-af38-account-create-rns5f"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.456697 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557ae8bd-5ad0-4822-bff1-6274e4523aa0-operator-scripts\") pod \"barbican-db-create-fwkxc\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.456870 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d3e000-d092-48ca-bf36-ecbb55cf016b-operator-scripts\") pod \"cinder-db-create-fjmk9\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.456944 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlkxv\" (UniqueName: \"kubernetes.io/projected/b7e1e485-bf18-48d5-bb34-f213b5680994-kube-api-access-xlkxv\") pod \"cinder-af38-account-create-rns5f\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " pod="openstack/cinder-af38-account-create-rns5f"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.457003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfzj4\" (UniqueName: \"kubernetes.io/projected/96d3e000-d092-48ca-bf36-ecbb55cf016b-kube-api-access-zfzj4\") pod \"cinder-db-create-fjmk9\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.457040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrmx\" (UniqueName: \"kubernetes.io/projected/557ae8bd-5ad0-4822-bff1-6274e4523aa0-kube-api-access-mmrmx\") pod \"barbican-db-create-fwkxc\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.457573 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d3e000-d092-48ca-bf36-ecbb55cf016b-operator-scripts\") pod \"cinder-db-create-fjmk9\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.492588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfzj4\" (UniqueName: \"kubernetes.io/projected/96d3e000-d092-48ca-bf36-ecbb55cf016b-kube-api-access-zfzj4\") pod \"cinder-db-create-fjmk9\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.492601 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qqlpp"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.493684 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qqlpp"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.497888 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.501794 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.501804 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gql6l"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.501803 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.504123 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rknvx"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.505239 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rknvx"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.513148 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qqlpp"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.539577 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fjmk9"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.546140 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rknvx"]
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.561112 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8rzh\" (UniqueName: \"kubernetes.io/projected/382cc76c-7ba2-45f4-898c-10608b068c36-kube-api-access-f8rzh\") pod \"barbican-9e79-account-create-r62zt\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " pod="openstack/barbican-9e79-account-create-r62zt"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.561177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e1e485-bf18-48d5-bb34-f213b5680994-operator-scripts\") pod \"cinder-af38-account-create-rns5f\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " pod="openstack/cinder-af38-account-create-rns5f"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.561200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/382cc76c-7ba2-45f4-898c-10608b068c36-operator-scripts\") pod \"barbican-9e79-account-create-r62zt\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " pod="openstack/barbican-9e79-account-create-r62zt"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.561237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557ae8bd-5ad0-4822-bff1-6274e4523aa0-operator-scripts\") pod \"barbican-db-create-fwkxc\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.561324 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlkxv\" (UniqueName: \"kubernetes.io/projected/b7e1e485-bf18-48d5-bb34-f213b5680994-kube-api-access-xlkxv\") pod \"cinder-af38-account-create-rns5f\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " pod="openstack/cinder-af38-account-create-rns5f"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.561352 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrmx\" (UniqueName: \"kubernetes.io/projected/557ae8bd-5ad0-4822-bff1-6274e4523aa0-kube-api-access-mmrmx\") pod \"barbican-db-create-fwkxc\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.562212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e1e485-bf18-48d5-bb34-f213b5680994-operator-scripts\") pod \"cinder-af38-account-create-rns5f\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " pod="openstack/cinder-af38-account-create-rns5f"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.562687 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557ae8bd-5ad0-4822-bff1-6274e4523aa0-operator-scripts\") pod \"barbican-db-create-fwkxc\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.592280 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmrmx\" (UniqueName: \"kubernetes.io/projected/557ae8bd-5ad0-4822-bff1-6274e4523aa0-kube-api-access-mmrmx\") pod \"barbican-db-create-fwkxc\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " pod="openstack/barbican-db-create-fwkxc"
Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.592628 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlkxv\" (UniqueName: \"kubernetes.io/projected/b7e1e485-bf18-48d5-bb34-f213b5680994-kube-api-access-xlkxv\") pod \"cinder-af38-account-create-rns5f\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " pod="openstack/cinder-af38-account-create-rns5f"
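The records above walk each volume of cinder-db-create-fjmk9 and its peers through the reconciler's setup sequence: operationExecutor.VerifyControllerAttachedVolume started (reconciler_common.go:245), then operationExecutor.MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637), as with operator-scripts at 18:06:04.354579, .456870 and .457573. A small sketch for spotting volumes that enter the sequence but never report success; stalled_mounts is a hypothetical helper, and the regex matches the escaped \" quoting that the journal shows around UniqueName.

    import re
    from collections import defaultdict

    # volume identities appear with escaped quotes in the journal text:
    # (UniqueName: \"kubernetes.io/projected/<pod-uid>-kube-api-access-zfzj4\")
    UNIQ = re.compile(r'UniqueName: \\"(.*?)\\"')

    STAGES = (
        "operationExecutor.VerifyControllerAttachedVolume started",
        "operationExecutor.MountVolume started",
        "MountVolume.SetUp succeeded",
    )

    def stalled_mounts(lines):
        seen = defaultdict(set)
        for ln in lines:
            m = UNIQ.search(ln)
            if not m:
                continue
            for stage in STAGES:
                if stage in ln:
                    seen[m.group(1)].add(stage)
        # volumes that entered the reconciler but never reported SetUp success
        return [v for v, s in seen.items() if "MountVolume.SetUp succeeded" not in s]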
Need to start a new one" pod="openstack/barbican-db-create-fwkxc" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.662890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8rzh\" (UniqueName: \"kubernetes.io/projected/382cc76c-7ba2-45f4-898c-10608b068c36-kube-api-access-f8rzh\") pod \"barbican-9e79-account-create-r62zt\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.662964 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xffg\" (UniqueName: \"kubernetes.io/projected/0272e837-2dbf-4eca-bbf5-c33af7822bd2-kube-api-access-5xffg\") pod \"neutron-db-create-rknvx\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.663005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/382cc76c-7ba2-45f4-898c-10608b068c36-operator-scripts\") pod \"barbican-9e79-account-create-r62zt\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.663471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-config-data\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.663543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfd4\" (UniqueName: \"kubernetes.io/projected/65e445a9-a207-41eb-816d-de70c981c8c2-kube-api-access-jzfd4\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.663585 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-combined-ca-bundle\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.663660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272e837-2dbf-4eca-bbf5-c33af7822bd2-operator-scripts\") pod \"neutron-db-create-rknvx\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.670370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/382cc76c-7ba2-45f4-898c-10608b068c36-operator-scripts\") pod \"barbican-9e79-account-create-r62zt\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.670586 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-af38-account-create-rns5f" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.709615 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8rzh\" (UniqueName: \"kubernetes.io/projected/382cc76c-7ba2-45f4-898c-10608b068c36-kube-api-access-f8rzh\") pod \"barbican-9e79-account-create-r62zt\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.723713 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.737167 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-05c6-account-create-dxr7l"] Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.738279 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.740760 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.746029 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05c6-account-create-dxr7l"] Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.771179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-combined-ca-bundle\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.771286 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272e837-2dbf-4eca-bbf5-c33af7822bd2-operator-scripts\") pod \"neutron-db-create-rknvx\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.771420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xffg\" (UniqueName: \"kubernetes.io/projected/0272e837-2dbf-4eca-bbf5-c33af7822bd2-kube-api-access-5xffg\") pod \"neutron-db-create-rknvx\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.771558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-config-data\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.771700 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfd4\" (UniqueName: \"kubernetes.io/projected/65e445a9-a207-41eb-816d-de70c981c8c2-kube-api-access-jzfd4\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.772954 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272e837-2dbf-4eca-bbf5-c33af7822bd2-operator-scripts\") pod 
\"neutron-db-create-rknvx\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.775541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-config-data\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.781837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-combined-ca-bundle\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.788025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzfd4\" (UniqueName: \"kubernetes.io/projected/65e445a9-a207-41eb-816d-de70c981c8c2-kube-api-access-jzfd4\") pod \"keystone-db-sync-qqlpp\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.789519 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xffg\" (UniqueName: \"kubernetes.io/projected/0272e837-2dbf-4eca-bbf5-c33af7822bd2-kube-api-access-5xffg\") pod \"neutron-db-create-rknvx\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.873731 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnll\" (UniqueName: \"kubernetes.io/projected/c3159486-5491-4468-b849-04e91c41b248-kube-api-access-vpnll\") pod \"neutron-05c6-account-create-dxr7l\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.873818 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3159486-5491-4468-b849-04e91c41b248-operator-scripts\") pod \"neutron-05c6-account-create-dxr7l\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.970698 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.977123 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnll\" (UniqueName: \"kubernetes.io/projected/c3159486-5491-4468-b849-04e91c41b248-kube-api-access-vpnll\") pod \"neutron-05c6-account-create-dxr7l\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.977217 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3159486-5491-4468-b849-04e91c41b248-operator-scripts\") pod \"neutron-05c6-account-create-dxr7l\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.978835 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3159486-5491-4468-b849-04e91c41b248-operator-scripts\") pod \"neutron-05c6-account-create-dxr7l\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.988824 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:04 crc kubenswrapper[4768]: I1124 18:06:04.995853 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fjmk9"] Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.000616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnll\" (UniqueName: \"kubernetes.io/projected/c3159486-5491-4468-b849-04e91c41b248-kube-api-access-vpnll\") pod \"neutron-05c6-account-create-dxr7l\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.061642 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.216282 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fjmk9" event={"ID":"96d3e000-d092-48ca-bf36-ecbb55cf016b","Type":"ContainerStarted","Data":"2956408098b6b8364c9f47196fdd1e1d86c2f8d47417603c7c727f1b8eaa1eed"} Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.216645 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9e79-account-create-r62zt"] Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.274461 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-af38-account-create-rns5f"] Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.366625 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fwkxc"] Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.557730 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rknvx"] Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.637104 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qqlpp"] Nov 24 18:06:05 crc kubenswrapper[4768]: W1124 18:06:05.658829 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e445a9_a207_41eb_816d_de70c981c8c2.slice/crio-841f088fd8be41c2945b867a3724fdb9c06192240f7da73e7b514ea06de18065 WatchSource:0}: Error finding container 841f088fd8be41c2945b867a3724fdb9c06192240f7da73e7b514ea06de18065: Status 404 returned error can't find the container with id 841f088fd8be41c2945b867a3724fdb9c06192240f7da73e7b514ea06de18065 Nov 24 18:06:05 crc kubenswrapper[4768]: I1124 18:06:05.697929 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05c6-account-create-dxr7l"] Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.233947 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05c6-account-create-dxr7l" event={"ID":"c3159486-5491-4468-b849-04e91c41b248","Type":"ContainerStarted","Data":"11f244794baf49fd0d4b90ddcd02bc2fed357939076bd47a0dcff10fe7323daf"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.234271 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05c6-account-create-dxr7l" event={"ID":"c3159486-5491-4468-b849-04e91c41b248","Type":"ContainerStarted","Data":"f9eb7068f5d2d9b9290c290c188970efa9041e906a347ec61796d0e13c992a33"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.235556 4768 generic.go:334] "Generic (PLEG): container finished" podID="557ae8bd-5ad0-4822-bff1-6274e4523aa0" containerID="98d4b5745c9842133ca4de202d94ad1e0b87da5c8edd35c61ba1ada393f9112a" exitCode=0 Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.235669 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fwkxc" event={"ID":"557ae8bd-5ad0-4822-bff1-6274e4523aa0","Type":"ContainerDied","Data":"98d4b5745c9842133ca4de202d94ad1e0b87da5c8edd35c61ba1ada393f9112a"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.235689 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fwkxc" event={"ID":"557ae8bd-5ad0-4822-bff1-6274e4523aa0","Type":"ContainerStarted","Data":"79ea07f6c3ff57f8eb7fde6b8a82bca08f583f05717fb11ecd9d14099462152c"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.237104 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-rknvx" event={"ID":"0272e837-2dbf-4eca-bbf5-c33af7822bd2","Type":"ContainerStarted","Data":"d481d9d9973fd0b10ed8a5599ecd13af671880b54e2a552bb7f675baf9f44e88"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.237159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rknvx" event={"ID":"0272e837-2dbf-4eca-bbf5-c33af7822bd2","Type":"ContainerStarted","Data":"2edf6e2f0a2edb82f9ee0446581fd5a033051afb80d4625bdc079f4c9d4ad5a1"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.239431 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qqlpp" event={"ID":"65e445a9-a207-41eb-816d-de70c981c8c2","Type":"ContainerStarted","Data":"841f088fd8be41c2945b867a3724fdb9c06192240f7da73e7b514ea06de18065"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.242064 4768 generic.go:334] "Generic (PLEG): container finished" podID="96d3e000-d092-48ca-bf36-ecbb55cf016b" containerID="85c0bcee147f1445d747909ca66bc48a682424d8ecf1c3ecfb8bdff98dd20509" exitCode=0 Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.242120 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fjmk9" event={"ID":"96d3e000-d092-48ca-bf36-ecbb55cf016b","Type":"ContainerDied","Data":"85c0bcee147f1445d747909ca66bc48a682424d8ecf1c3ecfb8bdff98dd20509"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.243439 4768 generic.go:334] "Generic (PLEG): container finished" podID="b7e1e485-bf18-48d5-bb34-f213b5680994" containerID="7f581c1550e211cb9c91983d61d27d048cd945db28c3e2c53c13de1821fe0993" exitCode=0 Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.243492 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-af38-account-create-rns5f" event={"ID":"b7e1e485-bf18-48d5-bb34-f213b5680994","Type":"ContainerDied","Data":"7f581c1550e211cb9c91983d61d27d048cd945db28c3e2c53c13de1821fe0993"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.243508 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-af38-account-create-rns5f" event={"ID":"b7e1e485-bf18-48d5-bb34-f213b5680994","Type":"ContainerStarted","Data":"b051102ce2fa4e8e5458acd72f11b88ad191111d33bd8b34701d2aad928b7760"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.244652 4768 generic.go:334] "Generic (PLEG): container finished" podID="382cc76c-7ba2-45f4-898c-10608b068c36" containerID="dc47d00201cad57cca1bbec85467872c252113a2bb20be002f376408a9bd60d3" exitCode=0 Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.244678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9e79-account-create-r62zt" event={"ID":"382cc76c-7ba2-45f4-898c-10608b068c36","Type":"ContainerDied","Data":"dc47d00201cad57cca1bbec85467872c252113a2bb20be002f376408a9bd60d3"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.244690 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9e79-account-create-r62zt" event={"ID":"382cc76c-7ba2-45f4-898c-10608b068c36","Type":"ContainerStarted","Data":"fc797a11c4405f71619ef96775d735b541a052d19236a39bba29e6c8ed3f6388"} Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.268209 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-05c6-account-create-dxr7l" podStartSLOduration=2.268171788 podStartE2EDuration="2.268171788s" podCreationTimestamp="2025-11-24 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:06.255041657 +0000 UTC m=+1005.115623434" watchObservedRunningTime="2025-11-24 18:06:06.268171788 +0000 UTC m=+1005.128753565" Nov 24 18:06:06 crc kubenswrapper[4768]: I1124 18:06:06.340287 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-rknvx" podStartSLOduration=2.34026176 podStartE2EDuration="2.34026176s" podCreationTimestamp="2025-11-24 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:06.331691469 +0000 UTC m=+1005.192273236" watchObservedRunningTime="2025-11-24 18:06:06.34026176 +0000 UTC m=+1005.200843537" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.260027 4768 generic.go:334] "Generic (PLEG): container finished" podID="c3159486-5491-4468-b849-04e91c41b248" containerID="11f244794baf49fd0d4b90ddcd02bc2fed357939076bd47a0dcff10fe7323daf" exitCode=0 Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.260129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05c6-account-create-dxr7l" event={"ID":"c3159486-5491-4468-b849-04e91c41b248","Type":"ContainerDied","Data":"11f244794baf49fd0d4b90ddcd02bc2fed357939076bd47a0dcff10fe7323daf"} Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.263236 4768 generic.go:334] "Generic (PLEG): container finished" podID="0272e837-2dbf-4eca-bbf5-c33af7822bd2" containerID="d481d9d9973fd0b10ed8a5599ecd13af671880b54e2a552bb7f675baf9f44e88" exitCode=0 Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.263428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rknvx" event={"ID":"0272e837-2dbf-4eca-bbf5-c33af7822bd2","Type":"ContainerDied","Data":"d481d9d9973fd0b10ed8a5599ecd13af671880b54e2a552bb7f675baf9f44e88"} Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.594784 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fjmk9" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.635367 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfzj4\" (UniqueName: \"kubernetes.io/projected/96d3e000-d092-48ca-bf36-ecbb55cf016b-kube-api-access-zfzj4\") pod \"96d3e000-d092-48ca-bf36-ecbb55cf016b\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.635853 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d3e000-d092-48ca-bf36-ecbb55cf016b-operator-scripts\") pod \"96d3e000-d092-48ca-bf36-ecbb55cf016b\" (UID: \"96d3e000-d092-48ca-bf36-ecbb55cf016b\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.637364 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d3e000-d092-48ca-bf36-ecbb55cf016b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96d3e000-d092-48ca-bf36-ecbb55cf016b" (UID: "96d3e000-d092-48ca-bf36-ecbb55cf016b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.668596 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d3e000-d092-48ca-bf36-ecbb55cf016b-kube-api-access-zfzj4" (OuterVolumeSpecName: "kube-api-access-zfzj4") pod "96d3e000-d092-48ca-bf36-ecbb55cf016b" (UID: "96d3e000-d092-48ca-bf36-ecbb55cf016b"). InnerVolumeSpecName "kube-api-access-zfzj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.737072 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfzj4\" (UniqueName: \"kubernetes.io/projected/96d3e000-d092-48ca-bf36-ecbb55cf016b-kube-api-access-zfzj4\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.737109 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96d3e000-d092-48ca-bf36-ecbb55cf016b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.769804 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af38-account-create-rns5f" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.776820 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fwkxc" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.782895 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.941234 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e1e485-bf18-48d5-bb34-f213b5680994-operator-scripts\") pod \"b7e1e485-bf18-48d5-bb34-f213b5680994\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.941385 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/382cc76c-7ba2-45f4-898c-10608b068c36-operator-scripts\") pod \"382cc76c-7ba2-45f4-898c-10608b068c36\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.941446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmrmx\" (UniqueName: \"kubernetes.io/projected/557ae8bd-5ad0-4822-bff1-6274e4523aa0-kube-api-access-mmrmx\") pod \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.941539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8rzh\" (UniqueName: \"kubernetes.io/projected/382cc76c-7ba2-45f4-898c-10608b068c36-kube-api-access-f8rzh\") pod \"382cc76c-7ba2-45f4-898c-10608b068c36\" (UID: \"382cc76c-7ba2-45f4-898c-10608b068c36\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.941746 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlkxv\" (UniqueName: \"kubernetes.io/projected/b7e1e485-bf18-48d5-bb34-f213b5680994-kube-api-access-xlkxv\") pod \"b7e1e485-bf18-48d5-bb34-f213b5680994\" (UID: \"b7e1e485-bf18-48d5-bb34-f213b5680994\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.941857 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557ae8bd-5ad0-4822-bff1-6274e4523aa0-operator-scripts\") pod \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\" (UID: \"557ae8bd-5ad0-4822-bff1-6274e4523aa0\") " Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.942185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e1e485-bf18-48d5-bb34-f213b5680994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7e1e485-bf18-48d5-bb34-f213b5680994" (UID: "b7e1e485-bf18-48d5-bb34-f213b5680994"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.942232 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382cc76c-7ba2-45f4-898c-10608b068c36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "382cc76c-7ba2-45f4-898c-10608b068c36" (UID: "382cc76c-7ba2-45f4-898c-10608b068c36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.942675 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ae8bd-5ad0-4822-bff1-6274e4523aa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "557ae8bd-5ad0-4822-bff1-6274e4523aa0" (UID: "557ae8bd-5ad0-4822-bff1-6274e4523aa0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.944111 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557ae8bd-5ad0-4822-bff1-6274e4523aa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.945361 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382cc76c-7ba2-45f4-898c-10608b068c36-kube-api-access-f8rzh" (OuterVolumeSpecName: "kube-api-access-f8rzh") pod "382cc76c-7ba2-45f4-898c-10608b068c36" (UID: "382cc76c-7ba2-45f4-898c-10608b068c36"). InnerVolumeSpecName "kube-api-access-f8rzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.945956 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557ae8bd-5ad0-4822-bff1-6274e4523aa0-kube-api-access-mmrmx" (OuterVolumeSpecName: "kube-api-access-mmrmx") pod "557ae8bd-5ad0-4822-bff1-6274e4523aa0" (UID: "557ae8bd-5ad0-4822-bff1-6274e4523aa0"). InnerVolumeSpecName "kube-api-access-mmrmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:07 crc kubenswrapper[4768]: I1124 18:06:07.946355 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e1e485-bf18-48d5-bb34-f213b5680994-kube-api-access-xlkxv" (OuterVolumeSpecName: "kube-api-access-xlkxv") pod "b7e1e485-bf18-48d5-bb34-f213b5680994" (UID: "b7e1e485-bf18-48d5-bb34-f213b5680994"). InnerVolumeSpecName "kube-api-access-xlkxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.045038 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlkxv\" (UniqueName: \"kubernetes.io/projected/b7e1e485-bf18-48d5-bb34-f213b5680994-kube-api-access-xlkxv\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.045447 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e1e485-bf18-48d5-bb34-f213b5680994-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.045464 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/382cc76c-7ba2-45f4-898c-10608b068c36-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.045473 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmrmx\" (UniqueName: \"kubernetes.io/projected/557ae8bd-5ad0-4822-bff1-6274e4523aa0-kube-api-access-mmrmx\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.045505 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8rzh\" (UniqueName: \"kubernetes.io/projected/382cc76c-7ba2-45f4-898c-10608b068c36-kube-api-access-f8rzh\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.274587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-af38-account-create-rns5f" event={"ID":"b7e1e485-bf18-48d5-bb34-f213b5680994","Type":"ContainerDied","Data":"b051102ce2fa4e8e5458acd72f11b88ad191111d33bd8b34701d2aad928b7760"} Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.274637 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b051102ce2fa4e8e5458acd72f11b88ad191111d33bd8b34701d2aad928b7760" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.274603 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-af38-account-create-rns5f" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.276458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9e79-account-create-r62zt" event={"ID":"382cc76c-7ba2-45f4-898c-10608b068c36","Type":"ContainerDied","Data":"fc797a11c4405f71619ef96775d735b541a052d19236a39bba29e6c8ed3f6388"} Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.276511 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc797a11c4405f71619ef96775d735b541a052d19236a39bba29e6c8ed3f6388" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.276563 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9e79-account-create-r62zt" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.279532 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fwkxc" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.279713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fwkxc" event={"ID":"557ae8bd-5ad0-4822-bff1-6274e4523aa0","Type":"ContainerDied","Data":"79ea07f6c3ff57f8eb7fde6b8a82bca08f583f05717fb11ecd9d14099462152c"} Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.279747 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ea07f6c3ff57f8eb7fde6b8a82bca08f583f05717fb11ecd9d14099462152c" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.282038 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fjmk9" event={"ID":"96d3e000-d092-48ca-bf36-ecbb55cf016b","Type":"ContainerDied","Data":"2956408098b6b8364c9f47196fdd1e1d86c2f8d47417603c7c727f1b8eaa1eed"} Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.282073 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fjmk9" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.282080 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2956408098b6b8364c9f47196fdd1e1d86c2f8d47417603c7c727f1b8eaa1eed" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.602041 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.758542 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpnll\" (UniqueName: \"kubernetes.io/projected/c3159486-5491-4468-b849-04e91c41b248-kube-api-access-vpnll\") pod \"c3159486-5491-4468-b849-04e91c41b248\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.758605 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3159486-5491-4468-b849-04e91c41b248-operator-scripts\") pod \"c3159486-5491-4468-b849-04e91c41b248\" (UID: \"c3159486-5491-4468-b849-04e91c41b248\") " Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.759363 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3159486-5491-4468-b849-04e91c41b248-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3159486-5491-4468-b849-04e91c41b248" (UID: "c3159486-5491-4468-b849-04e91c41b248"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.764878 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3159486-5491-4468-b849-04e91c41b248-kube-api-access-vpnll" (OuterVolumeSpecName: "kube-api-access-vpnll") pod "c3159486-5491-4468-b849-04e91c41b248" (UID: "c3159486-5491-4468-b849-04e91c41b248"). InnerVolumeSpecName "kube-api-access-vpnll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.860585 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpnll\" (UniqueName: \"kubernetes.io/projected/c3159486-5491-4468-b849-04e91c41b248-kube-api-access-vpnll\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:08 crc kubenswrapper[4768]: I1124 18:06:08.860630 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3159486-5491-4468-b849-04e91c41b248-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:09 crc kubenswrapper[4768]: I1124 18:06:09.292246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05c6-account-create-dxr7l" event={"ID":"c3159486-5491-4468-b849-04e91c41b248","Type":"ContainerDied","Data":"f9eb7068f5d2d9b9290c290c188970efa9041e906a347ec61796d0e13c992a33"} Nov 24 18:06:09 crc kubenswrapper[4768]: I1124 18:06:09.292291 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9eb7068f5d2d9b9290c290c188970efa9041e906a347ec61796d0e13c992a33" Nov 24 18:06:09 crc kubenswrapper[4768]: I1124 18:06:09.292337 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05c6-account-create-dxr7l" Nov 24 18:06:10 crc kubenswrapper[4768]: I1124 18:06:10.301335 4768 generic.go:334] "Generic (PLEG): container finished" podID="6d3c858f-af78-4df6-b30a-b7921b5a80f3" containerID="fae19a5ef71d853c1657876c58cf71ca2f4bee33723872d93533aa2608ff41ba" exitCode=0 Nov 24 18:06:10 crc kubenswrapper[4768]: I1124 18:06:10.301546 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q6fzj" event={"ID":"6d3c858f-af78-4df6-b30a-b7921b5a80f3","Type":"ContainerDied","Data":"fae19a5ef71d853c1657876c58cf71ca2f4bee33723872d93533aa2608ff41ba"} Nov 24 18:06:10 crc kubenswrapper[4768]: I1124 18:06:10.885330 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:10 crc kubenswrapper[4768]: I1124 18:06:10.995858 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272e837-2dbf-4eca-bbf5-c33af7822bd2-operator-scripts\") pod \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " Nov 24 18:06:10 crc kubenswrapper[4768]: I1124 18:06:10.996031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xffg\" (UniqueName: \"kubernetes.io/projected/0272e837-2dbf-4eca-bbf5-c33af7822bd2-kube-api-access-5xffg\") pod \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\" (UID: \"0272e837-2dbf-4eca-bbf5-c33af7822bd2\") " Nov 24 18:06:10 crc kubenswrapper[4768]: I1124 18:06:10.997203 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0272e837-2dbf-4eca-bbf5-c33af7822bd2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0272e837-2dbf-4eca-bbf5-c33af7822bd2" (UID: "0272e837-2dbf-4eca-bbf5-c33af7822bd2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.001475 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0272e837-2dbf-4eca-bbf5-c33af7822bd2-kube-api-access-5xffg" (OuterVolumeSpecName: "kube-api-access-5xffg") pod "0272e837-2dbf-4eca-bbf5-c33af7822bd2" (UID: "0272e837-2dbf-4eca-bbf5-c33af7822bd2"). InnerVolumeSpecName "kube-api-access-5xffg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.098684 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xffg\" (UniqueName: \"kubernetes.io/projected/0272e837-2dbf-4eca-bbf5-c33af7822bd2-kube-api-access-5xffg\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.098717 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0272e837-2dbf-4eca-bbf5-c33af7822bd2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.315329 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rknvx" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.315337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rknvx" event={"ID":"0272e837-2dbf-4eca-bbf5-c33af7822bd2","Type":"ContainerDied","Data":"2edf6e2f0a2edb82f9ee0446581fd5a033051afb80d4625bdc079f4c9d4ad5a1"} Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.316763 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2edf6e2f0a2edb82f9ee0446581fd5a033051afb80d4625bdc079f4c9d4ad5a1" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.320103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qqlpp" event={"ID":"65e445a9-a207-41eb-816d-de70c981c8c2","Type":"ContainerStarted","Data":"d384579148efc59c12f00c3a32f6aa5e5cdad4836a558c8cb959bf635f7fd590"} Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.365139 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qqlpp" podStartSLOduration=2.152180919 podStartE2EDuration="7.365113137s" podCreationTimestamp="2025-11-24 18:06:04 +0000 UTC" firstStartedPulling="2025-11-24 18:06:05.661645098 +0000 UTC m=+1004.522226875" lastFinishedPulling="2025-11-24 18:06:10.874577296 +0000 UTC m=+1009.735159093" observedRunningTime="2025-11-24 18:06:11.353163423 +0000 UTC m=+1010.213745220" watchObservedRunningTime="2025-11-24 18:06:11.365113137 +0000 UTC m=+1010.225694924" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.644003 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-q6fzj" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.811614 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-combined-ca-bundle\") pod \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.811690 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvk6r\" (UniqueName: \"kubernetes.io/projected/6d3c858f-af78-4df6-b30a-b7921b5a80f3-kube-api-access-fvk6r\") pod \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.811791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-config-data\") pod \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.811811 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-db-sync-config-data\") pod \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\" (UID: \"6d3c858f-af78-4df6-b30a-b7921b5a80f3\") " Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.817061 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3c858f-af78-4df6-b30a-b7921b5a80f3-kube-api-access-fvk6r" (OuterVolumeSpecName: "kube-api-access-fvk6r") pod "6d3c858f-af78-4df6-b30a-b7921b5a80f3" (UID: "6d3c858f-af78-4df6-b30a-b7921b5a80f3"). InnerVolumeSpecName "kube-api-access-fvk6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.818260 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6d3c858f-af78-4df6-b30a-b7921b5a80f3" (UID: "6d3c858f-af78-4df6-b30a-b7921b5a80f3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.834995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3c858f-af78-4df6-b30a-b7921b5a80f3" (UID: "6d3c858f-af78-4df6-b30a-b7921b5a80f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.857682 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-config-data" (OuterVolumeSpecName: "config-data") pod "6d3c858f-af78-4df6-b30a-b7921b5a80f3" (UID: "6d3c858f-af78-4df6-b30a-b7921b5a80f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.914658 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.914964 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.915096 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvk6r\" (UniqueName: \"kubernetes.io/projected/6d3c858f-af78-4df6-b30a-b7921b5a80f3-kube-api-access-fvk6r\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:11 crc kubenswrapper[4768]: I1124 18:06:11.915353 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3c858f-af78-4df6-b30a-b7921b5a80f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.329278 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q6fzj" event={"ID":"6d3c858f-af78-4df6-b30a-b7921b5a80f3","Type":"ContainerDied","Data":"a9addc227f200ded443770784d2427eb57fdfdd970bb38673a60d8f3cffeefda"} Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.329351 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9addc227f200ded443770784d2427eb57fdfdd970bb38673a60d8f3cffeefda" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.329288 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-q6fzj" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.606622 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-fs28g"] Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.606941 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3c858f-af78-4df6-b30a-b7921b5a80f3" containerName="glance-db-sync" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.606960 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3c858f-af78-4df6-b30a-b7921b5a80f3" containerName="glance-db-sync" Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.606973 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0272e837-2dbf-4eca-bbf5-c33af7822bd2" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.606980 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0272e837-2dbf-4eca-bbf5-c33af7822bd2" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.606990 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e1e485-bf18-48d5-bb34-f213b5680994" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.606996 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e1e485-bf18-48d5-bb34-f213b5680994" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.607003 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3159486-5491-4468-b849-04e91c41b248" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607009 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3159486-5491-4468-b849-04e91c41b248" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.607022 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557ae8bd-5ad0-4822-bff1-6274e4523aa0" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607027 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="557ae8bd-5ad0-4822-bff1-6274e4523aa0" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.607042 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382cc76c-7ba2-45f4-898c-10608b068c36" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607048 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="382cc76c-7ba2-45f4-898c-10608b068c36" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: E1124 18:06:12.607065 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d3e000-d092-48ca-bf36-ecbb55cf016b" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607071 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d3e000-d092-48ca-bf36-ecbb55cf016b" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607224 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d3e000-d092-48ca-bf36-ecbb55cf016b" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607239 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0272e837-2dbf-4eca-bbf5-c33af7822bd2" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: 
I1124 18:06:12.607252 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3c858f-af78-4df6-b30a-b7921b5a80f3" containerName="glance-db-sync" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607261 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e1e485-bf18-48d5-bb34-f213b5680994" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607270 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3159486-5491-4468-b849-04e91c41b248" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607276 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="382cc76c-7ba2-45f4-898c-10608b068c36" containerName="mariadb-account-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.607286 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="557ae8bd-5ad0-4822-bff1-6274e4523aa0" containerName="mariadb-database-create" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.608127 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.630955 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-fs28g"] Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.727754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-config\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.727917 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvhm7\" (UniqueName: \"kubernetes.io/projected/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-kube-api-access-vvhm7\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.727982 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.728017 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.728073 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.829336 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.829652 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.829753 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.829902 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-config\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.830315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.830400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.830554 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.830662 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-config\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.831287 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvhm7\" (UniqueName: \"kubernetes.io/projected/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-kube-api-access-vvhm7\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.851408 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvhm7\" (UniqueName: \"kubernetes.io/projected/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-kube-api-access-vvhm7\") pod \"dnsmasq-dns-54f9b7b8d9-fs28g\" 
(UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:12 crc kubenswrapper[4768]: I1124 18:06:12.928324 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:13 crc kubenswrapper[4768]: I1124 18:06:13.344031 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-fs28g"] Nov 24 18:06:13 crc kubenswrapper[4768]: W1124 18:06:13.350164 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16aa8a6c_1cbb_444e_ae9a_f3bb5b74ff5b.slice/crio-77b5573bf545d2dea35ce3f8842954b93fb492e4c71c62a39d4cf592293ef988 WatchSource:0}: Error finding container 77b5573bf545d2dea35ce3f8842954b93fb492e4c71c62a39d4cf592293ef988: Status 404 returned error can't find the container with id 77b5573bf545d2dea35ce3f8842954b93fb492e4c71c62a39d4cf592293ef988 Nov 24 18:06:14 crc kubenswrapper[4768]: I1124 18:06:14.344895 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" event={"ID":"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b","Type":"ContainerStarted","Data":"ed25c1c7bceef5abad24f5ef607222c98170c4ef80282caa95f9b8d57cfb274f"} Nov 24 18:06:14 crc kubenswrapper[4768]: I1124 18:06:14.345239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" event={"ID":"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b","Type":"ContainerStarted","Data":"77b5573bf545d2dea35ce3f8842954b93fb492e4c71c62a39d4cf592293ef988"} Nov 24 18:06:15 crc kubenswrapper[4768]: I1124 18:06:15.354622 4768 generic.go:334] "Generic (PLEG): container finished" podID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerID="ed25c1c7bceef5abad24f5ef607222c98170c4ef80282caa95f9b8d57cfb274f" exitCode=0 Nov 24 18:06:15 crc kubenswrapper[4768]: I1124 18:06:15.354750 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" event={"ID":"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b","Type":"ContainerDied","Data":"ed25c1c7bceef5abad24f5ef607222c98170c4ef80282caa95f9b8d57cfb274f"} Nov 24 18:06:16 crc kubenswrapper[4768]: E1124 18:06:16.591059 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e445a9_a207_41eb_816d_de70c981c8c2.slice/crio-conmon-d384579148efc59c12f00c3a32f6aa5e5cdad4836a558c8cb959bf635f7fd590.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:06:17 crc kubenswrapper[4768]: I1124 18:06:17.379796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" event={"ID":"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b","Type":"ContainerStarted","Data":"1f5b9fc042127d6f8b7d0150d1b4ad4a1855a4359657f22bedf8d6dc3b960114"} Nov 24 18:06:17 crc kubenswrapper[4768]: I1124 18:06:17.380342 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:17 crc kubenswrapper[4768]: I1124 18:06:17.383442 4768 generic.go:334] "Generic (PLEG): container finished" podID="65e445a9-a207-41eb-816d-de70c981c8c2" containerID="d384579148efc59c12f00c3a32f6aa5e5cdad4836a558c8cb959bf635f7fd590" exitCode=0 Nov 24 18:06:17 crc kubenswrapper[4768]: I1124 18:06:17.383507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qqlpp" 
event={"ID":"65e445a9-a207-41eb-816d-de70c981c8c2","Type":"ContainerDied","Data":"d384579148efc59c12f00c3a32f6aa5e5cdad4836a558c8cb959bf635f7fd590"} Nov 24 18:06:17 crc kubenswrapper[4768]: I1124 18:06:17.412928 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" podStartSLOduration=5.412861114 podStartE2EDuration="5.412861114s" podCreationTimestamp="2025-11-24 18:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:17.398966692 +0000 UTC m=+1016.259548509" watchObservedRunningTime="2025-11-24 18:06:17.412861114 +0000 UTC m=+1016.273442941" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.739011 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.850823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-config-data\") pod \"65e445a9-a207-41eb-816d-de70c981c8c2\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.851398 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzfd4\" (UniqueName: \"kubernetes.io/projected/65e445a9-a207-41eb-816d-de70c981c8c2-kube-api-access-jzfd4\") pod \"65e445a9-a207-41eb-816d-de70c981c8c2\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.851684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-combined-ca-bundle\") pod \"65e445a9-a207-41eb-816d-de70c981c8c2\" (UID: \"65e445a9-a207-41eb-816d-de70c981c8c2\") " Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.860920 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e445a9-a207-41eb-816d-de70c981c8c2-kube-api-access-jzfd4" (OuterVolumeSpecName: "kube-api-access-jzfd4") pod "65e445a9-a207-41eb-816d-de70c981c8c2" (UID: "65e445a9-a207-41eb-816d-de70c981c8c2"). InnerVolumeSpecName "kube-api-access-jzfd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.902727 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65e445a9-a207-41eb-816d-de70c981c8c2" (UID: "65e445a9-a207-41eb-816d-de70c981c8c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.931216 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-config-data" (OuterVolumeSpecName: "config-data") pod "65e445a9-a207-41eb-816d-de70c981c8c2" (UID: "65e445a9-a207-41eb-816d-de70c981c8c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.954659 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.954696 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzfd4\" (UniqueName: \"kubernetes.io/projected/65e445a9-a207-41eb-816d-de70c981c8c2-kube-api-access-jzfd4\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:18 crc kubenswrapper[4768]: I1124 18:06:18.954713 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e445a9-a207-41eb-816d-de70c981c8c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.405873 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qqlpp" event={"ID":"65e445a9-a207-41eb-816d-de70c981c8c2","Type":"ContainerDied","Data":"841f088fd8be41c2945b867a3724fdb9c06192240f7da73e7b514ea06de18065"} Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.406458 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841f088fd8be41c2945b867a3724fdb9c06192240f7da73e7b514ea06de18065" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.405948 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qqlpp" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.929186 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-fs28g"] Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.929448 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerName="dnsmasq-dns" containerID="cri-o://1f5b9fc042127d6f8b7d0150d1b4ad4a1855a4359657f22bedf8d6dc3b960114" gracePeriod=10 Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.954729 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hlk8x"] Nov 24 18:06:19 crc kubenswrapper[4768]: E1124 18:06:19.955106 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e445a9-a207-41eb-816d-de70c981c8c2" containerName="keystone-db-sync" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.955120 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e445a9-a207-41eb-816d-de70c981c8c2" containerName="keystone-db-sync" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.955303 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e445a9-a207-41eb-816d-de70c981c8c2" containerName="keystone-db-sync" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.955839 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.966673 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.967136 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.967285 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.967534 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.972442 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gql6l" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.986884 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-b6g7r"] Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.988627 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.990135 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-scripts\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.990170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-credential-keys\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.990216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-config-data\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.990255 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xq9t\" (UniqueName: \"kubernetes.io/projected/1e92475a-7cc4-4533-88eb-38f941a8b74e-kube-api-access-4xq9t\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.990290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-combined-ca-bundle\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.990313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-fernet-keys\") pod \"keystone-bootstrap-hlk8x\" (UID: 
\"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:19 crc kubenswrapper[4768]: I1124 18:06:19.994244 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlk8x"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.009577 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-b6g7r"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091354 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-config-data\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091418 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bm7d\" (UniqueName: \"kubernetes.io/projected/3883fa6f-ef18-47ad-9380-33d0c61dba66-kube-api-access-6bm7d\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xq9t\" (UniqueName: \"kubernetes.io/projected/1e92475a-7cc4-4533-88eb-38f941a8b74e-kube-api-access-4xq9t\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091508 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-combined-ca-bundle\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091535 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-fernet-keys\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-dns-svc\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091654 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-config\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.091681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 
24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.094160 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.094819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-scripts\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.094909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-credential-keys\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.107093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-combined-ca-bundle\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.108458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-config-data\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.108792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-scripts\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.110721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-fernet-keys\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.120810 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-credential-keys\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.126161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xq9t\" (UniqueName: \"kubernetes.io/projected/1e92475a-7cc4-4533-88eb-38f941a8b74e-kube-api-access-4xq9t\") pod \"keystone-bootstrap-hlk8x\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.180180 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wpggd"] Nov 24 18:06:20 crc 
kubenswrapper[4768]: I1124 18:06:20.182017 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.186985 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.187358 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.187477 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nrcgw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bm7d\" (UniqueName: \"kubernetes.io/projected/3883fa6f-ef18-47ad-9380-33d0c61dba66-kube-api-access-6bm7d\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-scripts\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195734 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-config-data\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195768 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-dns-svc\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195810 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-db-sync-config-data\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195828 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed13008-e82b-40d6-af72-abfb5a1223fb-etc-machine-id\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-combined-ca-bundle\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195879 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-config\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195894 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195919 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdfn\" (UniqueName: \"kubernetes.io/projected/8ed13008-e82b-40d6-af72-abfb5a1223fb-kube-api-access-xtdfn\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.195942 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.198241 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-dns-svc\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.198803 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.199247 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-55cpw"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.199957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.200327 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.200713 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-config\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.204184 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.204459 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tnchn" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.204523 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.227332 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wpggd"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.246573 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bm7d\" (UniqueName: \"kubernetes.io/projected/3883fa6f-ef18-47ad-9380-33d0c61dba66-kube-api-access-6bm7d\") pod \"dnsmasq-dns-6546db6db7-b6g7r\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.250105 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-55cpw"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.298369 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-scripts\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.300624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-config-data\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.300806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-db-sync-config-data\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.300826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed13008-e82b-40d6-af72-abfb5a1223fb-etc-machine-id\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.300856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-combined-ca-bundle\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.300894 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdfn\" (UniqueName: \"kubernetes.io/projected/8ed13008-e82b-40d6-af72-abfb5a1223fb-kube-api-access-xtdfn\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.305419 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed13008-e82b-40d6-af72-abfb5a1223fb-etc-machine-id\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.321461 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-config-data\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.321566 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xs4kv"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.321622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-db-sync-config-data\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.323679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-combined-ca-bundle\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.324340 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.325508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-scripts\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.327809 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.328002 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-grx7k" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.332356 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xs4kv"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.337184 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdfn\" (UniqueName: \"kubernetes.io/projected/8ed13008-e82b-40d6-af72-abfb5a1223fb-kube-api-access-xtdfn\") pod \"cinder-db-sync-wpggd\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.361694 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-b6g7r"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.362322 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.391247 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.395078 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.398009 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.400181 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.402598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-config\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.402639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-combined-ca-bundle\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.402678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8pf8\" (UniqueName: \"kubernetes.io/projected/dfd4bc52-bb80-45a4-8666-28e28e129c9e-kube-api-access-x8pf8\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.407928 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.435650 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.436124 4768 generic.go:334] "Generic (PLEG): container finished" podID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerID="1f5b9fc042127d6f8b7d0150d1b4ad4a1855a4359657f22bedf8d6dc3b960114" exitCode=0 Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.436159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" event={"ID":"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b","Type":"ContainerDied","Data":"1f5b9fc042127d6f8b7d0150d1b4ad4a1855a4359657f22bedf8d6dc3b960114"} Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.453107 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rgvsd"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.457249 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.462026 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.462336 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.464588 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9hmhx" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.472777 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vm92p"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.474450 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.498913 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rgvsd"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505591 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-combined-ca-bundle\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505666 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hrts\" (UniqueName: \"kubernetes.io/projected/093bb01a-1d6c-43cb-a0f0-7868857e241a-kube-api-access-6hrts\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505712 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-config-data\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8pf8\" (UniqueName: \"kubernetes.io/projected/dfd4bc52-bb80-45a4-8666-28e28e129c9e-kube-api-access-x8pf8\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-log-httpd\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505807 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-combined-ca-bundle\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505845 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdvg\" (UniqueName: \"kubernetes.io/projected/d0b8cf78-9bbe-44cd-8907-78fd9548d712-kube-api-access-9mdvg\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-db-sync-config-data\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-run-httpd\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.505955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-scripts\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.506010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-config\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.516289 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-combined-ca-bundle\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.516902 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-config\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.519048 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vm92p"] Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.531150 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wpggd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.531576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8pf8\" (UniqueName: \"kubernetes.io/projected/dfd4bc52-bb80-45a4-8666-28e28e129c9e-kube-api-access-x8pf8\") pod \"neutron-db-sync-55cpw\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.593969 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.605599 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607656 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607695 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hrts\" (UniqueName: \"kubernetes.io/projected/093bb01a-1d6c-43cb-a0f0-7868857e241a-kube-api-access-6hrts\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738a244f-751e-4d50-8ba2-6a9d122b9a69-logs\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607761 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crw6l\" (UniqueName: \"kubernetes.io/projected/738a244f-751e-4d50-8ba2-6a9d122b9a69-kube-api-access-crw6l\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-config-data\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607959 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-combined-ca-bundle\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.607981 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-scripts\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " 
pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-log-httpd\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-combined-ca-bundle\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608075 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608108 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdvg\" (UniqueName: \"kubernetes.io/projected/d0b8cf78-9bbe-44cd-8907-78fd9548d712-kube-api-access-9mdvg\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608128 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-db-sync-config-data\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608146 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-config-data\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-run-httpd\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-scripts\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608258 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxsn\" 
(UniqueName: \"kubernetes.io/projected/175cddfa-c51a-40dc-be36-97af7e8b7cc2-kube-api-access-hfxsn\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.608276 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-config\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.609550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-log-httpd\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.611037 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-run-httpd\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.614752 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-combined-ca-bundle\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.615314 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.616986 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.618430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-db-sync-config-data\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.619866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-config-data\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.623339 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-scripts\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.630479 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9mdvg\" (UniqueName: \"kubernetes.io/projected/d0b8cf78-9bbe-44cd-8907-78fd9548d712-kube-api-access-9mdvg\") pod \"ceilometer-0\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.640891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hrts\" (UniqueName: \"kubernetes.io/projected/093bb01a-1d6c-43cb-a0f0-7868857e241a-kube-api-access-6hrts\") pod \"barbican-db-sync-xs4kv\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") " pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.682875 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.710670 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-dns-svc\") pod \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.710762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-nb\") pod \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.710849 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-config\") pod \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.710941 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-sb\") pod \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.710962 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvhm7\" (UniqueName: \"kubernetes.io/projected/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-kube-api-access-vvhm7\") pod \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\" (UID: \"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b\") " Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711183 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-config-data\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxsn\" (UniqueName: \"kubernetes.io/projected/175cddfa-c51a-40dc-be36-97af7e8b7cc2-kube-api-access-hfxsn\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711252 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-config\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738a244f-751e-4d50-8ba2-6a9d122b9a69-logs\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711315 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crw6l\" (UniqueName: \"kubernetes.io/projected/738a244f-751e-4d50-8ba2-6a9d122b9a69-kube-api-access-crw6l\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711343 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711368 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-combined-ca-bundle\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711390 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711415 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-scripts\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.711442 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.713529 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738a244f-751e-4d50-8ba2-6a9d122b9a69-logs\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.732367 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.732546 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.736590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-config\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.736820 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.737349 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.775725 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-combined-ca-bundle\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.776546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-kube-api-access-vvhm7" (OuterVolumeSpecName: "kube-api-access-vvhm7") pod "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" (UID: "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b"). InnerVolumeSpecName "kube-api-access-vvhm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.776623 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-scripts\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.777568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxsn\" (UniqueName: \"kubernetes.io/projected/175cddfa-c51a-40dc-be36-97af7e8b7cc2-kube-api-access-hfxsn\") pod \"dnsmasq-dns-7987f74bbc-vm92p\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.777774 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-config-data\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.778177 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crw6l\" (UniqueName: \"kubernetes.io/projected/738a244f-751e-4d50-8ba2-6a9d122b9a69-kube-api-access-crw6l\") pod \"placement-db-sync-rgvsd\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") " pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.783864 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.812827 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvhm7\" (UniqueName: \"kubernetes.io/projected/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-kube-api-access-vvhm7\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.818726 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.822136 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" (UID: "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.829415 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" (UID: "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.843318 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-config" (OuterVolumeSpecName: "config") pod "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" (UID: "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.860369 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" (UID: "16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.916420 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.916458 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.916467 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.916476 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:20 crc kubenswrapper[4768]: I1124 18:06:20.937455 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-b6g7r"] Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.006099 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlk8x"] Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.139248 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-55cpw"] Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.155005 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wpggd"] Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.281763 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xs4kv"] Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.391999 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rgvsd"] Nov 24 18:06:21 crc kubenswrapper[4768]: W1124 18:06:21.405033 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod738a244f_751e_4d50_8ba2_6a9d122b9a69.slice/crio-7295704fa0b0d1b3f5019c7504a2beb3fa4136ca87b047323210e08930ee497b WatchSource:0}: Error finding container 7295704fa0b0d1b3f5019c7504a2beb3fa4136ca87b047323210e08930ee497b: Status 404 returned error can't find the container with id 7295704fa0b0d1b3f5019c7504a2beb3fa4136ca87b047323210e08930ee497b Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.411527 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:06:21 crc kubenswrapper[4768]: W1124 18:06:21.431338 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b8cf78_9bbe_44cd_8907_78fd9548d712.slice/crio-d9f1c6a89d928104d6d03b36d146df7bef337e950d08421889dc753dbeef4178 WatchSource:0}: Error finding container 
d9f1c6a89d928104d6d03b36d146df7bef337e950d08421889dc753dbeef4178: Status 404 returned error can't find the container with id d9f1c6a89d928104d6d03b36d146df7bef337e950d08421889dc753dbeef4178 Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.449384 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerStarted","Data":"d9f1c6a89d928104d6d03b36d146df7bef337e950d08421889dc753dbeef4178"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.453414 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" event={"ID":"3883fa6f-ef18-47ad-9380-33d0c61dba66","Type":"ContainerStarted","Data":"c887186a0d0d5de80f839e7389233f25e3c8c040617fa43b9b7955eaa0fe152c"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.453635 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" podUID="3883fa6f-ef18-47ad-9380-33d0c61dba66" containerName="init" containerID="cri-o://771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512" gracePeriod=10 Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.454470 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rgvsd" event={"ID":"738a244f-751e-4d50-8ba2-6a9d122b9a69","Type":"ContainerStarted","Data":"7295704fa0b0d1b3f5019c7504a2beb3fa4136ca87b047323210e08930ee497b"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.456347 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xs4kv" event={"ID":"093bb01a-1d6c-43cb-a0f0-7868857e241a","Type":"ContainerStarted","Data":"b1c2d031e71a4854b760e603c9db183ab10a77fb02218425fc50d64186fcf111"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.458340 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpggd" event={"ID":"8ed13008-e82b-40d6-af72-abfb5a1223fb","Type":"ContainerStarted","Data":"58ed14d30adcb38bc39b50e9f93da38a0c6c78603ec940812c9cb0d76b286332"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.460898 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" event={"ID":"16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b","Type":"ContainerDied","Data":"77b5573bf545d2dea35ce3f8842954b93fb492e4c71c62a39d4cf592293ef988"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.460944 4768 scope.go:117] "RemoveContainer" containerID="1f5b9fc042127d6f8b7d0150d1b4ad4a1855a4359657f22bedf8d6dc3b960114" Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.460951 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-fs28g" Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.462981 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlk8x" event={"ID":"1e92475a-7cc4-4533-88eb-38f941a8b74e","Type":"ContainerStarted","Data":"f04ebf8f996a285adf0cd06a8f46a7aa50b6ab6900e8cc8628dbb650c28d9869"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.463026 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlk8x" event={"ID":"1e92475a-7cc4-4533-88eb-38f941a8b74e","Type":"ContainerStarted","Data":"bffa8811529c7d7e54a984c1b90e0c8f087655fd77302d59e3bf7165385e9818"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.468393 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55cpw" event={"ID":"dfd4bc52-bb80-45a4-8666-28e28e129c9e","Type":"ContainerStarted","Data":"a48a584276e1be535d7f4be9a5516457657724a67c9945cc82c71fbe13a7e8df"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.468436 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55cpw" event={"ID":"dfd4bc52-bb80-45a4-8666-28e28e129c9e","Type":"ContainerStarted","Data":"914ed618e374689bae60592fc626a36f92b40bce12ede78adeeed6dd877fd001"} Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.489428 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hlk8x" podStartSLOduration=2.489405573 podStartE2EDuration="2.489405573s" podCreationTimestamp="2025-11-24 18:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:21.482737298 +0000 UTC m=+1020.343319075" watchObservedRunningTime="2025-11-24 18:06:21.489405573 +0000 UTC m=+1020.349987350" Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.493170 4768 scope.go:117] "RemoveContainer" containerID="ed25c1c7bceef5abad24f5ef607222c98170c4ef80282caa95f9b8d57cfb274f" Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.515677 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-55cpw" podStartSLOduration=1.515644907 podStartE2EDuration="1.515644907s" podCreationTimestamp="2025-11-24 18:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:21.510740986 +0000 UTC m=+1020.371322763" watchObservedRunningTime="2025-11-24 18:06:21.515644907 +0000 UTC m=+1020.376226684" Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.530968 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vm92p"] Nov 24 18:06:21 crc kubenswrapper[4768]: W1124 18:06:21.539201 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod175cddfa_c51a_40dc_be36_97af7e8b7cc2.slice/crio-e060cc0d270527edb5d637afacb3a12d6ae236a90a554cfb21d8fd5d7914dba8 WatchSource:0}: Error finding container e060cc0d270527edb5d637afacb3a12d6ae236a90a554cfb21d8fd5d7914dba8: Status 404 returned error can't find the container with id e060cc0d270527edb5d637afacb3a12d6ae236a90a554cfb21d8fd5d7914dba8 Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.543270 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-fs28g"] Nov 24 18:06:21 crc kubenswrapper[4768]: I1124 18:06:21.551868 4768 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-fs28g"] Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:21.913470 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" path="/var/lib/kubelet/pods/16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b/volumes" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:21.923254 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.038921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-nb\") pod \"3883fa6f-ef18-47ad-9380-33d0c61dba66\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.039276 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bm7d\" (UniqueName: \"kubernetes.io/projected/3883fa6f-ef18-47ad-9380-33d0c61dba66-kube-api-access-6bm7d\") pod \"3883fa6f-ef18-47ad-9380-33d0c61dba66\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.039331 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-sb\") pod \"3883fa6f-ef18-47ad-9380-33d0c61dba66\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.039418 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-dns-svc\") pod \"3883fa6f-ef18-47ad-9380-33d0c61dba66\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.039538 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-config\") pod \"3883fa6f-ef18-47ad-9380-33d0c61dba66\" (UID: \"3883fa6f-ef18-47ad-9380-33d0c61dba66\") " Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.053642 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3883fa6f-ef18-47ad-9380-33d0c61dba66-kube-api-access-6bm7d" (OuterVolumeSpecName: "kube-api-access-6bm7d") pod "3883fa6f-ef18-47ad-9380-33d0c61dba66" (UID: "3883fa6f-ef18-47ad-9380-33d0c61dba66"). InnerVolumeSpecName "kube-api-access-6bm7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.098662 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3883fa6f-ef18-47ad-9380-33d0c61dba66" (UID: "3883fa6f-ef18-47ad-9380-33d0c61dba66"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.109708 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3883fa6f-ef18-47ad-9380-33d0c61dba66" (UID: "3883fa6f-ef18-47ad-9380-33d0c61dba66"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.134099 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3883fa6f-ef18-47ad-9380-33d0c61dba66" (UID: "3883fa6f-ef18-47ad-9380-33d0c61dba66"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.152154 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.152196 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bm7d\" (UniqueName: \"kubernetes.io/projected/3883fa6f-ef18-47ad-9380-33d0c61dba66-kube-api-access-6bm7d\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.152209 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.152220 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.171195 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-config" (OuterVolumeSpecName: "config") pod "3883fa6f-ef18-47ad-9380-33d0c61dba66" (UID: "3883fa6f-ef18-47ad-9380-33d0c61dba66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.196706 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.259685 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3883fa6f-ef18-47ad-9380-33d0c61dba66-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.480217 4768 generic.go:334] "Generic (PLEG): container finished" podID="3883fa6f-ef18-47ad-9380-33d0c61dba66" containerID="771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512" exitCode=0 Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.480285 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.480333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" event={"ID":"3883fa6f-ef18-47ad-9380-33d0c61dba66","Type":"ContainerDied","Data":"771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512"} Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.480401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-b6g7r" event={"ID":"3883fa6f-ef18-47ad-9380-33d0c61dba66","Type":"ContainerDied","Data":"c887186a0d0d5de80f839e7389233f25e3c8c040617fa43b9b7955eaa0fe152c"} Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.480422 4768 scope.go:117] "RemoveContainer" containerID="771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.485260 4768 generic.go:334] "Generic (PLEG): container finished" podID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerID="78946e99e25b3b2e600aad0bdb4090af4ee0b5f3d3c69f63cd5cc8d6a0ec8c42" exitCode=0 Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.485304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" event={"ID":"175cddfa-c51a-40dc-be36-97af7e8b7cc2","Type":"ContainerDied","Data":"78946e99e25b3b2e600aad0bdb4090af4ee0b5f3d3c69f63cd5cc8d6a0ec8c42"} Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.485323 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" event={"ID":"175cddfa-c51a-40dc-be36-97af7e8b7cc2","Type":"ContainerStarted","Data":"e060cc0d270527edb5d637afacb3a12d6ae236a90a554cfb21d8fd5d7914dba8"} Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.632774 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-b6g7r"] Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.641205 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-b6g7r"] Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.643445 4768 scope.go:117] "RemoveContainer" containerID="771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512" Nov 24 18:06:22 crc kubenswrapper[4768]: E1124 18:06:22.644801 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512\": container with ID starting with 771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512 not found: ID does not exist" containerID="771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512" Nov 24 18:06:22 crc kubenswrapper[4768]: I1124 18:06:22.644873 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512"} err="failed to get container status \"771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512\": rpc error: code = NotFound desc = could not find container \"771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512\": container with ID starting with 771146c610d5d69262c172f460b0f38a444899f89ec60704281b1b90c0ad5512 not found: ID does not exist" Nov 24 18:06:23 crc kubenswrapper[4768]: I1124 18:06:23.506541 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" 
event={"ID":"175cddfa-c51a-40dc-be36-97af7e8b7cc2","Type":"ContainerStarted","Data":"8420e7989603046d91339c3f4e7d49d3d212580a5589625f3b76a80ffa791ad4"} Nov 24 18:06:23 crc kubenswrapper[4768]: I1124 18:06:23.508211 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:23 crc kubenswrapper[4768]: I1124 18:06:23.526465 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" podStartSLOduration=3.526444177 podStartE2EDuration="3.526444177s" podCreationTimestamp="2025-11-24 18:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:23.524221172 +0000 UTC m=+1022.384802949" watchObservedRunningTime="2025-11-24 18:06:23.526444177 +0000 UTC m=+1022.387025954" Nov 24 18:06:23 crc kubenswrapper[4768]: I1124 18:06:23.909153 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3883fa6f-ef18-47ad-9380-33d0c61dba66" path="/var/lib/kubelet/pods/3883fa6f-ef18-47ad-9380-33d0c61dba66/volumes" Nov 24 18:06:25 crc kubenswrapper[4768]: I1124 18:06:25.529187 4768 generic.go:334] "Generic (PLEG): container finished" podID="1e92475a-7cc4-4533-88eb-38f941a8b74e" containerID="f04ebf8f996a285adf0cd06a8f46a7aa50b6ab6900e8cc8628dbb650c28d9869" exitCode=0 Nov 24 18:06:25 crc kubenswrapper[4768]: I1124 18:06:25.529280 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlk8x" event={"ID":"1e92475a-7cc4-4533-88eb-38f941a8b74e","Type":"ContainerDied","Data":"f04ebf8f996a285adf0cd06a8f46a7aa50b6ab6900e8cc8628dbb650c28d9869"} Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.782653 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.923818 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-config-data\") pod \"1e92475a-7cc4-4533-88eb-38f941a8b74e\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.923865 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-scripts\") pod \"1e92475a-7cc4-4533-88eb-38f941a8b74e\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.923937 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xq9t\" (UniqueName: \"kubernetes.io/projected/1e92475a-7cc4-4533-88eb-38f941a8b74e-kube-api-access-4xq9t\") pod \"1e92475a-7cc4-4533-88eb-38f941a8b74e\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.924015 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-combined-ca-bundle\") pod \"1e92475a-7cc4-4533-88eb-38f941a8b74e\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.924057 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-credential-keys\") pod \"1e92475a-7cc4-4533-88eb-38f941a8b74e\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.925056 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-fernet-keys\") pod \"1e92475a-7cc4-4533-88eb-38f941a8b74e\" (UID: \"1e92475a-7cc4-4533-88eb-38f941a8b74e\") " Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.930593 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1e92475a-7cc4-4533-88eb-38f941a8b74e" (UID: "1e92475a-7cc4-4533-88eb-38f941a8b74e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.932311 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1e92475a-7cc4-4533-88eb-38f941a8b74e" (UID: "1e92475a-7cc4-4533-88eb-38f941a8b74e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.934325 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-scripts" (OuterVolumeSpecName: "scripts") pod "1e92475a-7cc4-4533-88eb-38f941a8b74e" (UID: "1e92475a-7cc4-4533-88eb-38f941a8b74e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.936563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e92475a-7cc4-4533-88eb-38f941a8b74e-kube-api-access-4xq9t" (OuterVolumeSpecName: "kube-api-access-4xq9t") pod "1e92475a-7cc4-4533-88eb-38f941a8b74e" (UID: "1e92475a-7cc4-4533-88eb-38f941a8b74e"). InnerVolumeSpecName "kube-api-access-4xq9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.949966 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e92475a-7cc4-4533-88eb-38f941a8b74e" (UID: "1e92475a-7cc4-4533-88eb-38f941a8b74e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:29 crc kubenswrapper[4768]: I1124 18:06:29.950727 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-config-data" (OuterVolumeSpecName: "config-data") pod "1e92475a-7cc4-4533-88eb-38f941a8b74e" (UID: "1e92475a-7cc4-4533-88eb-38f941a8b74e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.027599 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.027628 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.027638 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xq9t\" (UniqueName: \"kubernetes.io/projected/1e92475a-7cc4-4533-88eb-38f941a8b74e-kube-api-access-4xq9t\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.027649 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.027657 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.027665 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e92475a-7cc4-4533-88eb-38f941a8b74e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.573694 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlk8x" event={"ID":"1e92475a-7cc4-4533-88eb-38f941a8b74e","Type":"ContainerDied","Data":"bffa8811529c7d7e54a984c1b90e0c8f087655fd77302d59e3bf7165385e9818"} Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.573766 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bffa8811529c7d7e54a984c1b90e0c8f087655fd77302d59e3bf7165385e9818" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 
18:06:30.573812 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hlk8x" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.820750 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.874727 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hlk8x"] Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.888094 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hlk8x"] Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.900029 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cfpv6"] Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.900462 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" containerID="cri-o://9670aca0447e91bed48b8acb8636d7b8a53952ca3b86abc67ce05de9ccd1308c" gracePeriod=10 Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.974316 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ttdbg"] Nov 24 18:06:30 crc kubenswrapper[4768]: E1124 18:06:30.975177 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e92475a-7cc4-4533-88eb-38f941a8b74e" containerName="keystone-bootstrap" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975220 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e92475a-7cc4-4533-88eb-38f941a8b74e" containerName="keystone-bootstrap" Nov 24 18:06:30 crc kubenswrapper[4768]: E1124 18:06:30.975240 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerName="dnsmasq-dns" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975250 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerName="dnsmasq-dns" Nov 24 18:06:30 crc kubenswrapper[4768]: E1124 18:06:30.975293 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3883fa6f-ef18-47ad-9380-33d0c61dba66" containerName="init" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975303 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3883fa6f-ef18-47ad-9380-33d0c61dba66" containerName="init" Nov 24 18:06:30 crc kubenswrapper[4768]: E1124 18:06:30.975321 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerName="init" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975335 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerName="init" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975637 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3883fa6f-ef18-47ad-9380-33d0c61dba66" containerName="init" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975673 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="16aa8a6c-1cbb-444e-ae9a-f3bb5b74ff5b" containerName="dnsmasq-dns" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.975684 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e92475a-7cc4-4533-88eb-38f941a8b74e" containerName="keystone-bootstrap" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.976683 4768 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.978950 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.981445 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.981686 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.981678 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.982368 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gql6l" Nov 24 18:06:30 crc kubenswrapper[4768]: I1124 18:06:30.998629 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ttdbg"] Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.149284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-fernet-keys\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.149459 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-credential-keys\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.149503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgxn\" (UniqueName: \"kubernetes.io/projected/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-kube-api-access-lbgxn\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.149565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-config-data\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.149593 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-combined-ca-bundle\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.149738 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-scripts\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.251538 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-fernet-keys\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.251630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-credential-keys\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.251661 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbgxn\" (UniqueName: \"kubernetes.io/projected/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-kube-api-access-lbgxn\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.251690 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-config-data\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.251716 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-combined-ca-bundle\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.251778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-scripts\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.256584 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-scripts\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.256886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-credential-keys\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.259942 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-combined-ca-bundle\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.261135 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-config-data\") pod \"keystone-bootstrap-ttdbg\" 
(UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.268311 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-fernet-keys\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.275276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbgxn\" (UniqueName: \"kubernetes.io/projected/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-kube-api-access-lbgxn\") pod \"keystone-bootstrap-ttdbg\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") " pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.300236 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttdbg" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.315206 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.586731 4768 generic.go:334] "Generic (PLEG): container finished" podID="fd223bd5-4be2-4240-bd86-a72e479be131" containerID="9670aca0447e91bed48b8acb8636d7b8a53952ca3b86abc67ce05de9ccd1308c" exitCode=0 Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.586807 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" event={"ID":"fd223bd5-4be2-4240-bd86-a72e479be131","Type":"ContainerDied","Data":"9670aca0447e91bed48b8acb8636d7b8a53952ca3b86abc67ce05de9ccd1308c"} Nov 24 18:06:31 crc kubenswrapper[4768]: I1124 18:06:31.912705 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e92475a-7cc4-4533-88eb-38f941a8b74e" path="/var/lib/kubelet/pods/1e92475a-7cc4-4533-88eb-38f941a8b74e/volumes" Nov 24 18:06:36 crc kubenswrapper[4768]: I1124 18:06:36.314762 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 24 18:06:41 crc kubenswrapper[4768]: I1124 18:06:41.315475 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Nov 24 18:06:41 crc kubenswrapper[4768]: I1124 18:06:41.316472 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:06:43 crc kubenswrapper[4768]: E1124 18:06:43.397384 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 18:06:43 crc kubenswrapper[4768]: E1124 18:06:43.398045 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64ch578h66h5dh65fhb4h695h675h669hc8h584h65ch66h68ch547hc5h69h79h657h559h59chc6h5bbh65ch579h687h87h5c8hb4h54ch9fh585q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mdvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d0b8cf78-9bbe-44cd-8907-78fd9548d712): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:06:43 crc kubenswrapper[4768]: I1124 18:06:43.679379 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfd4bc52-bb80-45a4-8666-28e28e129c9e" containerID="a48a584276e1be535d7f4be9a5516457657724a67c9945cc82c71fbe13a7e8df" exitCode=0 Nov 24 18:06:43 crc kubenswrapper[4768]: I1124 18:06:43.679451 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55cpw" event={"ID":"dfd4bc52-bb80-45a4-8666-28e28e129c9e","Type":"ContainerDied","Data":"a48a584276e1be535d7f4be9a5516457657724a67c9945cc82c71fbe13a7e8df"} Nov 24 18:06:44 crc kubenswrapper[4768]: E1124 18:06:44.451654 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 18:06:44 crc kubenswrapper[4768]: E1124 18:06:44.452225 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtdfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wpggd_openstack(8ed13008-e82b-40d6-af72-abfb5a1223fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:06:44 crc kubenswrapper[4768]: E1124 18:06:44.453433 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wpggd" podUID="8ed13008-e82b-40d6-af72-abfb5a1223fb" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.689203 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" event={"ID":"fd223bd5-4be2-4240-bd86-a72e479be131","Type":"ContainerDied","Data":"e426b643b3b110bf667d015810eea610e897533d3f5b28ed62dcbc8e7dbcf216"} Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.689260 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e426b643b3b110bf667d015810eea610e897533d3f5b28ed62dcbc8e7dbcf216" Nov 24 18:06:44 crc kubenswrapper[4768]: E1124 18:06:44.691915 4768 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-wpggd" podUID="8ed13008-e82b-40d6-af72-abfb5a1223fb" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.718473 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.811145 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-config\") pod \"fd223bd5-4be2-4240-bd86-a72e479be131\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.811220 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w6vk\" (UniqueName: \"kubernetes.io/projected/fd223bd5-4be2-4240-bd86-a72e479be131-kube-api-access-8w6vk\") pod \"fd223bd5-4be2-4240-bd86-a72e479be131\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.811313 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-nb\") pod \"fd223bd5-4be2-4240-bd86-a72e479be131\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.811379 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-sb\") pod \"fd223bd5-4be2-4240-bd86-a72e479be131\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.811422 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-dns-svc\") pod \"fd223bd5-4be2-4240-bd86-a72e479be131\" (UID: \"fd223bd5-4be2-4240-bd86-a72e479be131\") " Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.827747 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd223bd5-4be2-4240-bd86-a72e479be131-kube-api-access-8w6vk" (OuterVolumeSpecName: "kube-api-access-8w6vk") pod "fd223bd5-4be2-4240-bd86-a72e479be131" (UID: "fd223bd5-4be2-4240-bd86-a72e479be131"). InnerVolumeSpecName "kube-api-access-8w6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.854738 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ttdbg"] Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.873678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-config" (OuterVolumeSpecName: "config") pod "fd223bd5-4be2-4240-bd86-a72e479be131" (UID: "fd223bd5-4be2-4240-bd86-a72e479be131"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.884450 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd223bd5-4be2-4240-bd86-a72e479be131" (UID: "fd223bd5-4be2-4240-bd86-a72e479be131"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.890648 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd223bd5-4be2-4240-bd86-a72e479be131" (UID: "fd223bd5-4be2-4240-bd86-a72e479be131"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.899564 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd223bd5-4be2-4240-bd86-a72e479be131" (UID: "fd223bd5-4be2-4240-bd86-a72e479be131"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.914363 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w6vk\" (UniqueName: \"kubernetes.io/projected/fd223bd5-4be2-4240-bd86-a72e479be131-kube-api-access-8w6vk\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.914400 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.914409 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.914420 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:44 crc kubenswrapper[4768]: I1124 18:06:44.914431 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd223bd5-4be2-4240-bd86-a72e479be131-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:45 crc kubenswrapper[4768]: W1124 18:06:45.129254 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod188f141f_b2a1_4ca5_b86d_ac1c6ea86163.slice/crio-d116b5d60d7b77d502648f0aca0a7499f9a9f5a8bbd3d2b2f98f9b2d788ca60a WatchSource:0}: Error finding container d116b5d60d7b77d502648f0aca0a7499f9a9f5a8bbd3d2b2f98f9b2d788ca60a: Status 404 returned error can't find the container with id d116b5d60d7b77d502648f0aca0a7499f9a9f5a8bbd3d2b2f98f9b2d788ca60a Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.143008 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.320353 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-combined-ca-bundle\") pod \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.320785 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8pf8\" (UniqueName: \"kubernetes.io/projected/dfd4bc52-bb80-45a4-8666-28e28e129c9e-kube-api-access-x8pf8\") pod \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.320867 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-config\") pod \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\" (UID: \"dfd4bc52-bb80-45a4-8666-28e28e129c9e\") " Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.326671 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd4bc52-bb80-45a4-8666-28e28e129c9e-kube-api-access-x8pf8" (OuterVolumeSpecName: "kube-api-access-x8pf8") pod "dfd4bc52-bb80-45a4-8666-28e28e129c9e" (UID: "dfd4bc52-bb80-45a4-8666-28e28e129c9e"). InnerVolumeSpecName "kube-api-access-x8pf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.352985 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-config" (OuterVolumeSpecName: "config") pod "dfd4bc52-bb80-45a4-8666-28e28e129c9e" (UID: "dfd4bc52-bb80-45a4-8666-28e28e129c9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.356604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfd4bc52-bb80-45a4-8666-28e28e129c9e" (UID: "dfd4bc52-bb80-45a4-8666-28e28e129c9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.422598 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.422641 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8pf8\" (UniqueName: \"kubernetes.io/projected/dfd4bc52-bb80-45a4-8666-28e28e129c9e-kube-api-access-x8pf8\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.422656 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dfd4bc52-bb80-45a4-8666-28e28e129c9e-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.703601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerStarted","Data":"756c8026532e94696fa6b9fa0598cd4361a90365b3db883f9864b4341d9ed87d"} Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.706455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttdbg" event={"ID":"188f141f-b2a1-4ca5-b86d-ac1c6ea86163","Type":"ContainerStarted","Data":"4a063b3dbd324dc8a5bc04c97bd708430fd95cb5c9bcbee43f3670fb789d12d7"} Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.706549 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttdbg" event={"ID":"188f141f-b2a1-4ca5-b86d-ac1c6ea86163","Type":"ContainerStarted","Data":"d116b5d60d7b77d502648f0aca0a7499f9a9f5a8bbd3d2b2f98f9b2d788ca60a"} Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.713358 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rgvsd" event={"ID":"738a244f-751e-4d50-8ba2-6a9d122b9a69","Type":"ContainerStarted","Data":"bf0be984b8c32428988def99e6e7e0103a33e28e64b3345449a381d730e02c78"} Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.725936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xs4kv" event={"ID":"093bb01a-1d6c-43cb-a0f0-7868857e241a","Type":"ContainerStarted","Data":"ea3271f5abd5164ffeb18b5ce7cbc8a2dfceb2c593ea4a75cbb1b4f17d56e371"} Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.728322 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55cpw" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.728335 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cfpv6" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.728315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55cpw" event={"ID":"dfd4bc52-bb80-45a4-8666-28e28e129c9e","Type":"ContainerDied","Data":"914ed618e374689bae60592fc626a36f92b40bce12ede78adeeed6dd877fd001"} Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.728538 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="914ed618e374689bae60592fc626a36f92b40bce12ede78adeeed6dd877fd001" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.739600 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ttdbg" podStartSLOduration=15.739568247 podStartE2EDuration="15.739568247s" podCreationTimestamp="2025-11-24 18:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:45.736446854 +0000 UTC m=+1044.597028631" watchObservedRunningTime="2025-11-24 18:06:45.739568247 +0000 UTC m=+1044.600150044" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.765143 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rgvsd" podStartSLOduration=2.782553598 podStartE2EDuration="25.765112485s" podCreationTimestamp="2025-11-24 18:06:20 +0000 UTC" firstStartedPulling="2025-11-24 18:06:21.414038671 +0000 UTC m=+1020.274620448" lastFinishedPulling="2025-11-24 18:06:44.396597558 +0000 UTC m=+1043.257179335" observedRunningTime="2025-11-24 18:06:45.761029227 +0000 UTC m=+1044.621611024" watchObservedRunningTime="2025-11-24 18:06:45.765112485 +0000 UTC m=+1044.625694272" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.784617 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xs4kv" podStartSLOduration=2.658690994 podStartE2EDuration="25.784587803s" podCreationTimestamp="2025-11-24 18:06:20 +0000 UTC" firstStartedPulling="2025-11-24 18:06:21.291281145 +0000 UTC m=+1020.151862922" lastFinishedPulling="2025-11-24 18:06:44.417177934 +0000 UTC m=+1043.277759731" observedRunningTime="2025-11-24 18:06:45.775401629 +0000 UTC m=+1044.635983406" watchObservedRunningTime="2025-11-24 18:06:45.784587803 +0000 UTC m=+1044.645169600" Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.799343 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cfpv6"] Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.805744 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cfpv6"] Nov 24 18:06:45 crc kubenswrapper[4768]: I1124 18:06:45.960558 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" path="/var/lib/kubelet/pods/fd223bd5-4be2-4240-bd86-a72e479be131/volumes" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.025231 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-9hkml"] Nov 24 18:06:46 crc kubenswrapper[4768]: E1124 18:06:46.025706 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="init" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.025721 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="init" Nov 24 18:06:46 crc 
kubenswrapper[4768]: E1124 18:06:46.025738 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.025744 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" Nov 24 18:06:46 crc kubenswrapper[4768]: E1124 18:06:46.025772 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd4bc52-bb80-45a4-8666-28e28e129c9e" containerName="neutron-db-sync" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.025778 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd4bc52-bb80-45a4-8666-28e28e129c9e" containerName="neutron-db-sync" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.025919 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd4bc52-bb80-45a4-8666-28e28e129c9e" containerName="neutron-db-sync" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.025942 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd223bd5-4be2-4240-bd86-a72e479be131" containerName="dnsmasq-dns" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.026851 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.041404 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-9hkml"] Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.064862 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c9cf6cc78-ssqjz"] Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.068633 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.072056 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tnchn" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.072284 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.072398 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.072945 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.085571 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c9cf6cc78-ssqjz"] Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.144175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89g6h\" (UniqueName: \"kubernetes.io/projected/dfe35d6f-6397-4460-8366-07504f40963f-kube-api-access-89g6h\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.144309 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.144347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-config\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.144838 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-dns-svc\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.144999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-config\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89g6h\" (UniqueName: 
\"kubernetes.io/projected/dfe35d6f-6397-4460-8366-07504f40963f-kube-api-access-89g6h\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-combined-ca-bundle\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246727 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246748 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-config\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-ovndb-tls-certs\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246795 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-dns-svc\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcsd\" (UniqueName: \"kubernetes.io/projected/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-kube-api-access-6qcsd\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.246901 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-httpd-config\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.247661 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.247678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-config\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.247735 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.247982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-dns-svc\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.268987 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89g6h\" (UniqueName: \"kubernetes.io/projected/dfe35d6f-6397-4460-8366-07504f40963f-kube-api-access-89g6h\") pod \"dnsmasq-dns-7b946d459c-9hkml\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.348731 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-combined-ca-bundle\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.348790 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-ovndb-tls-certs\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.348841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcsd\" (UniqueName: \"kubernetes.io/projected/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-kube-api-access-6qcsd\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.348893 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-httpd-config\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.348918 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-config\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: 
\"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.354244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-ovndb-tls-certs\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.365571 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.365872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-httpd-config\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.368601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-config\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.370101 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-combined-ca-bundle\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.370543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcsd\" (UniqueName: \"kubernetes.io/projected/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-kube-api-access-6qcsd\") pod \"neutron-5c9cf6cc78-ssqjz\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.392125 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:46 crc kubenswrapper[4768]: I1124 18:06:46.880098 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-9hkml"] Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.088407 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c9cf6cc78-ssqjz"] Nov 24 18:06:47 crc kubenswrapper[4768]: W1124 18:06:47.108899 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e2ca70_ec3c_4578_9f19_c05d6bb47fb2.slice/crio-a7aadbb1366cff9530f57ca69991ffde95befaa9ede69671308f86926c1d9d41 WatchSource:0}: Error finding container a7aadbb1366cff9530f57ca69991ffde95befaa9ede69671308f86926c1d9d41: Status 404 returned error can't find the container with id a7aadbb1366cff9530f57ca69991ffde95befaa9ede69671308f86926c1d9d41 Nov 24 18:06:47 crc kubenswrapper[4768]: E1124 18:06:47.379560 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfe35d6f_6397_4460_8366_07504f40963f.slice/crio-conmon-4b642ee2248e7d6abbc888a578977ddd2615490080844f67ec4dc83240208847.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfe35d6f_6397_4460_8366_07504f40963f.slice/crio-4b642ee2248e7d6abbc888a578977ddd2615490080844f67ec4dc83240208847.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.758207 4768 generic.go:334] "Generic (PLEG): container finished" podID="738a244f-751e-4d50-8ba2-6a9d122b9a69" containerID="bf0be984b8c32428988def99e6e7e0103a33e28e64b3345449a381d730e02c78" exitCode=0 Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.758287 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rgvsd" event={"ID":"738a244f-751e-4d50-8ba2-6a9d122b9a69","Type":"ContainerDied","Data":"bf0be984b8c32428988def99e6e7e0103a33e28e64b3345449a381d730e02c78"} Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.781858 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfe35d6f-6397-4460-8366-07504f40963f" containerID="4b642ee2248e7d6abbc888a578977ddd2615490080844f67ec4dc83240208847" exitCode=0 Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.781948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" event={"ID":"dfe35d6f-6397-4460-8366-07504f40963f","Type":"ContainerDied","Data":"4b642ee2248e7d6abbc888a578977ddd2615490080844f67ec4dc83240208847"} Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.781973 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" event={"ID":"dfe35d6f-6397-4460-8366-07504f40963f","Type":"ContainerStarted","Data":"cb7c61467d62bf84f9c9299402fff2b94d9b34a2ca23e6d3092a346b1cde91f8"} Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.804439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c9cf6cc78-ssqjz" event={"ID":"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2","Type":"ContainerStarted","Data":"9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87"} Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.804934 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c9cf6cc78-ssqjz" 
event={"ID":"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2","Type":"ContainerStarted","Data":"3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd"} Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.804951 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c9cf6cc78-ssqjz" event={"ID":"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2","Type":"ContainerStarted","Data":"a7aadbb1366cff9530f57ca69991ffde95befaa9ede69671308f86926c1d9d41"} Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.805194 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:06:47 crc kubenswrapper[4768]: I1124 18:06:47.843600 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c9cf6cc78-ssqjz" podStartSLOduration=1.843574261 podStartE2EDuration="1.843574261s" podCreationTimestamp="2025-11-24 18:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:47.826769144 +0000 UTC m=+1046.687350921" watchObservedRunningTime="2025-11-24 18:06:47.843574261 +0000 UTC m=+1046.704156038" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.329717 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-844dbf79df-5t2np"] Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.331617 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.333824 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.334003 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.353566 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-844dbf79df-5t2np"] Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.414693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-httpd-config\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.414793 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-public-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.414859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-internal-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.414883 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzbqj\" (UniqueName: \"kubernetes.io/projected/6f9024a7-971e-460c-8b41-157dc2403a44-kube-api-access-gzbqj\") pod 
\"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.414967 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-ovndb-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.414996 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-combined-ca-bundle\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.415032 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-config\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.515511 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-combined-ca-bundle\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.515564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-config\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.515596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-httpd-config\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.515660 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-public-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.516727 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-internal-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.516754 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbqj\" (UniqueName: \"kubernetes.io/projected/6f9024a7-971e-460c-8b41-157dc2403a44-kube-api-access-gzbqj\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " 
pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.516811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-ovndb-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.520087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-config\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.522267 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-combined-ca-bundle\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.522402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-internal-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.522811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-ovndb-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.522996 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-httpd-config\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.525748 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f9024a7-971e-460c-8b41-157dc2403a44-public-tls-certs\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.533575 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzbqj\" (UniqueName: \"kubernetes.io/projected/6f9024a7-971e-460c-8b41-157dc2403a44-kube-api-access-gzbqj\") pod \"neutron-844dbf79df-5t2np\" (UID: \"6f9024a7-971e-460c-8b41-157dc2403a44\") " pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.670114 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.813699 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" event={"ID":"dfe35d6f-6397-4460-8366-07504f40963f","Type":"ContainerStarted","Data":"e0dcd4b10a8fad097eaad004133aad8be95f8d23a3c18d93edfac7053ba05500"} Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.814997 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.817731 4768 generic.go:334] "Generic (PLEG): container finished" podID="188f141f-b2a1-4ca5-b86d-ac1c6ea86163" containerID="4a063b3dbd324dc8a5bc04c97bd708430fd95cb5c9bcbee43f3670fb789d12d7" exitCode=0 Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.817833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttdbg" event={"ID":"188f141f-b2a1-4ca5-b86d-ac1c6ea86163","Type":"ContainerDied","Data":"4a063b3dbd324dc8a5bc04c97bd708430fd95cb5c9bcbee43f3670fb789d12d7"} Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.825000 4768 generic.go:334] "Generic (PLEG): container finished" podID="093bb01a-1d6c-43cb-a0f0-7868857e241a" containerID="ea3271f5abd5164ffeb18b5ce7cbc8a2dfceb2c593ea4a75cbb1b4f17d56e371" exitCode=0 Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.825219 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xs4kv" event={"ID":"093bb01a-1d6c-43cb-a0f0-7868857e241a","Type":"ContainerDied","Data":"ea3271f5abd5164ffeb18b5ce7cbc8a2dfceb2c593ea4a75cbb1b4f17d56e371"} Nov 24 18:06:48 crc kubenswrapper[4768]: I1124 18:06:48.833874 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" podStartSLOduration=3.833825077 podStartE2EDuration="3.833825077s" podCreationTimestamp="2025-11-24 18:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:48.828969618 +0000 UTC m=+1047.689551395" watchObservedRunningTime="2025-11-24 18:06:48.833825077 +0000 UTC m=+1047.694406854" Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.281012 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rgvsd" Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.290728 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xs4kv" Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.300236 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.349424 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-credential-keys\") pod \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-fernet-keys\") pod \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351175 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-db-sync-config-data\") pod \"093bb01a-1d6c-43cb-a0f0-7868857e241a\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351247 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crw6l\" (UniqueName: \"kubernetes.io/projected/738a244f-751e-4d50-8ba2-6a9d122b9a69-kube-api-access-crw6l\") pod \"738a244f-751e-4d50-8ba2-6a9d122b9a69\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-combined-ca-bundle\") pod \"738a244f-751e-4d50-8ba2-6a9d122b9a69\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351362 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-scripts\") pod \"738a244f-751e-4d50-8ba2-6a9d122b9a69\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351390 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738a244f-751e-4d50-8ba2-6a9d122b9a69-logs\") pod \"738a244f-751e-4d50-8ba2-6a9d122b9a69\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351445 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-config-data\") pod \"738a244f-751e-4d50-8ba2-6a9d122b9a69\" (UID: \"738a244f-751e-4d50-8ba2-6a9d122b9a69\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-combined-ca-bundle\") pod \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351502 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hrts\" (UniqueName: \"kubernetes.io/projected/093bb01a-1d6c-43cb-a0f0-7868857e241a-kube-api-access-6hrts\") pod \"093bb01a-1d6c-43cb-a0f0-7868857e241a\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351532 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-combined-ca-bundle\") pod \"093bb01a-1d6c-43cb-a0f0-7868857e241a\" (UID: \"093bb01a-1d6c-43cb-a0f0-7868857e241a\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351567 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgxn\" (UniqueName: \"kubernetes.io/projected/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-kube-api-access-lbgxn\") pod \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351586 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-config-data\") pod \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.351614 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-scripts\") pod \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\" (UID: \"188f141f-b2a1-4ca5-b86d-ac1c6ea86163\") "
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.352180 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/738a244f-751e-4d50-8ba2-6a9d122b9a69-logs" (OuterVolumeSpecName: "logs") pod "738a244f-751e-4d50-8ba2-6a9d122b9a69" (UID: "738a244f-751e-4d50-8ba2-6a9d122b9a69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.356961 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "188f141f-b2a1-4ca5-b86d-ac1c6ea86163" (UID: "188f141f-b2a1-4ca5-b86d-ac1c6ea86163"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.358224 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-scripts" (OuterVolumeSpecName: "scripts") pod "738a244f-751e-4d50-8ba2-6a9d122b9a69" (UID: "738a244f-751e-4d50-8ba2-6a9d122b9a69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.358862 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-kube-api-access-lbgxn" (OuterVolumeSpecName: "kube-api-access-lbgxn") pod "188f141f-b2a1-4ca5-b86d-ac1c6ea86163" (UID: "188f141f-b2a1-4ca5-b86d-ac1c6ea86163"). InnerVolumeSpecName "kube-api-access-lbgxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.361737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093bb01a-1d6c-43cb-a0f0-7868857e241a-kube-api-access-6hrts" (OuterVolumeSpecName: "kube-api-access-6hrts") pod "093bb01a-1d6c-43cb-a0f0-7868857e241a" (UID: "093bb01a-1d6c-43cb-a0f0-7868857e241a"). InnerVolumeSpecName "kube-api-access-6hrts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.362213 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738a244f-751e-4d50-8ba2-6a9d122b9a69-kube-api-access-crw6l" (OuterVolumeSpecName: "kube-api-access-crw6l") pod "738a244f-751e-4d50-8ba2-6a9d122b9a69" (UID: "738a244f-751e-4d50-8ba2-6a9d122b9a69"). InnerVolumeSpecName "kube-api-access-crw6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.369524 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "093bb01a-1d6c-43cb-a0f0-7868857e241a" (UID: "093bb01a-1d6c-43cb-a0f0-7868857e241a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.377569 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-scripts" (OuterVolumeSpecName: "scripts") pod "188f141f-b2a1-4ca5-b86d-ac1c6ea86163" (UID: "188f141f-b2a1-4ca5-b86d-ac1c6ea86163"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.377764 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "188f141f-b2a1-4ca5-b86d-ac1c6ea86163" (UID: "188f141f-b2a1-4ca5-b86d-ac1c6ea86163"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.386278 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-config-data" (OuterVolumeSpecName: "config-data") pod "188f141f-b2a1-4ca5-b86d-ac1c6ea86163" (UID: "188f141f-b2a1-4ca5-b86d-ac1c6ea86163"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.386737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-config-data" (OuterVolumeSpecName: "config-data") pod "738a244f-751e-4d50-8ba2-6a9d122b9a69" (UID: "738a244f-751e-4d50-8ba2-6a9d122b9a69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.387188 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "738a244f-751e-4d50-8ba2-6a9d122b9a69" (UID: "738a244f-751e-4d50-8ba2-6a9d122b9a69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.389831 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "093bb01a-1d6c-43cb-a0f0-7868857e241a" (UID: "093bb01a-1d6c-43cb-a0f0-7868857e241a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.393609 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "188f141f-b2a1-4ca5-b86d-ac1c6ea86163" (UID: "188f141f-b2a1-4ca5-b86d-ac1c6ea86163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453688 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453719 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453728 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453738 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crw6l\" (UniqueName: \"kubernetes.io/projected/738a244f-751e-4d50-8ba2-6a9d122b9a69-kube-api-access-crw6l\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453749 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453759 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453768 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/738a244f-751e-4d50-8ba2-6a9d122b9a69-logs\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453777 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738a244f-751e-4d50-8ba2-6a9d122b9a69-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453786 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453796 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hrts\" (UniqueName: \"kubernetes.io/projected/093bb01a-1d6c-43cb-a0f0-7868857e241a-kube-api-access-6hrts\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453804 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093bb01a-1d6c-43cb-a0f0-7868857e241a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453813 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbgxn\" (UniqueName: \"kubernetes.io/projected/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-kube-api-access-lbgxn\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453847 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.453857 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/188f141f-b2a1-4ca5-b86d-ac1c6ea86163-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.840860 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttdbg" event={"ID":"188f141f-b2a1-4ca5-b86d-ac1c6ea86163","Type":"ContainerDied","Data":"d116b5d60d7b77d502648f0aca0a7499f9a9f5a8bbd3d2b2f98f9b2d788ca60a"}
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.840918 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d116b5d60d7b77d502648f0aca0a7499f9a9f5a8bbd3d2b2f98f9b2d788ca60a"
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.840997 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttdbg"
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.847902 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rgvsd" event={"ID":"738a244f-751e-4d50-8ba2-6a9d122b9a69","Type":"ContainerDied","Data":"7295704fa0b0d1b3f5019c7504a2beb3fa4136ca87b047323210e08930ee497b"}
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.847951 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7295704fa0b0d1b3f5019c7504a2beb3fa4136ca87b047323210e08930ee497b"
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.848010 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rgvsd"
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.852610 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xs4kv"
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.852623 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xs4kv" event={"ID":"093bb01a-1d6c-43cb-a0f0-7868857e241a","Type":"ContainerDied","Data":"b1c2d031e71a4854b760e603c9db183ab10a77fb02218425fc50d64186fcf111"}
Nov 24 18:06:50 crc kubenswrapper[4768]: I1124 18:06:50.852656 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c2d031e71a4854b760e603c9db183ab10a77fb02218425fc50d64186fcf111"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.033454 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56748c45b5-4df84"]
Nov 24 18:06:51 crc kubenswrapper[4768]: E1124 18:06:51.033947 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738a244f-751e-4d50-8ba2-6a9d122b9a69" containerName="placement-db-sync"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.033971 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="738a244f-751e-4d50-8ba2-6a9d122b9a69" containerName="placement-db-sync"
Nov 24 18:06:51 crc kubenswrapper[4768]: E1124 18:06:51.034003 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093bb01a-1d6c-43cb-a0f0-7868857e241a" containerName="barbican-db-sync"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.034015 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="093bb01a-1d6c-43cb-a0f0-7868857e241a" containerName="barbican-db-sync"
Nov 24 18:06:51 crc kubenswrapper[4768]: E1124 18:06:51.034031 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188f141f-b2a1-4ca5-b86d-ac1c6ea86163" containerName="keystone-bootstrap"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.034039 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="188f141f-b2a1-4ca5-b86d-ac1c6ea86163" containerName="keystone-bootstrap"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.034253 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="738a244f-751e-4d50-8ba2-6a9d122b9a69" containerName="placement-db-sync"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.034278 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="093bb01a-1d6c-43cb-a0f0-7868857e241a" containerName="barbican-db-sync"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.034296 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="188f141f-b2a1-4ca5-b86d-ac1c6ea86163" containerName="keystone-bootstrap"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.034959 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.037684 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gql6l"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.037932 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.038056 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.038271 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.038532 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.039137 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56748c45b5-4df84"]
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.041696 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.081868 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-public-tls-certs\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082251 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-combined-ca-bundle\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-fernet-keys\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzp5\" (UniqueName: \"kubernetes.io/projected/434c7b39-9f1a-4032-b6fb-41c315a3a521-kube-api-access-7kzp5\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-internal-tls-certs\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-config-data\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082684 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-scripts\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.082753 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-credential-keys\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.162616 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-97698dcdb-54zqg"]
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.165037 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-97698dcdb-54zqg"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.172833 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.172937 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-grx7k"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.173206 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.179980 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-97698dcdb-54zqg"]
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzp5\" (UniqueName: \"kubernetes.io/projected/434c7b39-9f1a-4032-b6fb-41c315a3a521-kube-api-access-7kzp5\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184739 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-internal-tls-certs\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-config-data\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-scripts\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-credential-keys\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-config-data-custom\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxj6t\" (UniqueName: \"kubernetes.io/projected/5cb6b015-ae5e-438f-9aec-c25982a2febc-kube-api-access-rxj6t\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184960 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-public-tls-certs\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.184990 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cb6b015-ae5e-438f-9aec-c25982a2febc-logs\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.185017 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-config-data\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.185049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-combined-ca-bundle\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.185070 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-combined-ca-bundle\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.185111 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-fernet-keys\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.190917 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-b7d468cdf-9fjfm"]
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.192270 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-internal-tls-certs\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.192366 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b7d468cdf-9fjfm"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.194986 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.200017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-public-tls-certs\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.200853 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-credential-keys\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.202078 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-scripts\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.203873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-config-data\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.203935 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b7d468cdf-9fjfm"]
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.206886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-fernet-keys\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.207403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434c7b39-9f1a-4032-b6fb-41c315a3a521-combined-ca-bundle\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.219199 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzp5\" (UniqueName: \"kubernetes.io/projected/434c7b39-9f1a-4032-b6fb-41c315a3a521-kube-api-access-7kzp5\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84"
\"kubernetes.io/projected/434c7b39-9f1a-4032-b6fb-41c315a3a521-kube-api-access-7kzp5\") pod \"keystone-56748c45b5-4df84\" (UID: \"434c7b39-9f1a-4032-b6fb-41c315a3a521\") " pod="openstack/keystone-56748c45b5-4df84" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.269823 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-9hkml"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cb6b015-ae5e-438f-9aec-c25982a2febc-logs\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290462 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-config-data\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290532 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-combined-ca-bundle\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-config-data\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290666 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgk9\" (UniqueName: \"kubernetes.io/projected/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-kube-api-access-xqgk9\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-combined-ca-bundle\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290808 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-logs\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290888 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-config-data-custom\") pod 
\"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290908 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxj6t\" (UniqueName: \"kubernetes.io/projected/5cb6b015-ae5e-438f-9aec-c25982a2febc-kube-api-access-rxj6t\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.290944 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-config-data-custom\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.291448 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cb6b015-ae5e-438f-9aec-c25982a2febc-logs\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.303989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-config-data-custom\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.307738 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-config-data\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.314776 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-r57xq"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.317297 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.319235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb6b015-ae5e-438f-9aec-c25982a2febc-combined-ca-bundle\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.328721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxj6t\" (UniqueName: \"kubernetes.io/projected/5cb6b015-ae5e-438f-9aec-c25982a2febc-kube-api-access-rxj6t\") pod \"barbican-keystone-listener-97698dcdb-54zqg\" (UID: \"5cb6b015-ae5e-438f-9aec-c25982a2febc\") " pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.357976 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56748c45b5-4df84" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.361384 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-r57xq"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394294 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-config\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394327 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-config-data-custom\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394369 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-dns-svc\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-config-data\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394466 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgk9\" (UniqueName: \"kubernetes.io/projected/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-kube-api-access-xqgk9\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.394505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmj2\" (UniqueName: \"kubernetes.io/projected/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-kube-api-access-6tmj2\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.398529 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.398649 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-combined-ca-bundle\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.398685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-logs\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.399308 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-logs\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.411049 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-config-data\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.416313 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-combined-ca-bundle\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.426155 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-config-data-custom\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.433414 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78f464b796-86kkm"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.435189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgk9\" (UniqueName: \"kubernetes.io/projected/b343e1cc-a6b5-4074-98b3-a4bddb9b2730-kube-api-access-xqgk9\") pod \"barbican-worker-b7d468cdf-9fjfm\" (UID: \"b343e1cc-a6b5-4074-98b3-a4bddb9b2730\") " pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.435990 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.439923 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.450184 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f464b796-86kkm"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.499861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-logs\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.499941 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl8wr\" (UniqueName: \"kubernetes.io/projected/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-kube-api-access-xl8wr\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.499966 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmj2\" (UniqueName: \"kubernetes.io/projected/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-kube-api-access-6tmj2\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.499986 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.500004 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.500036 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-combined-ca-bundle\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.500132 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-config\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.500162 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " 
pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.500178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-dns-svc\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.500205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data-custom\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.501933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-config\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.502182 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.502901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-dns-svc\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.503325 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.525466 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-76b54949f4-59kjn"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.527360 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.528203 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmj2\" (UniqueName: \"kubernetes.io/projected/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-kube-api-access-6tmj2\") pod \"dnsmasq-dns-6bb684768f-r57xq\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.530265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.531429 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.532914 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.533160 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9hmhx" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.533241 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.537253 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76b54949f4-59kjn"] Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.589858 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602249 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c2665c-ef67-4325-bad9-7e42cf3195bd-logs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602335 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-config-data\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data-custom\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qg2b\" (UniqueName: \"kubernetes.io/projected/43c2665c-ef67-4325-bad9-7e42cf3195bd-kube-api-access-5qg2b\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-logs\") pod \"barbican-api-78f464b796-86kkm\" (UID: 
\"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602551 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-public-tls-certs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-scripts\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl8wr\" (UniqueName: \"kubernetes.io/projected/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-kube-api-access-xl8wr\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-combined-ca-bundle\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-internal-tls-certs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.602942 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-combined-ca-bundle\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.603088 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-logs\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.605549 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data-custom\") pod \"barbican-api-78f464b796-86kkm\" (UID: 
\"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.607948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-combined-ca-bundle\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.609802 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.627034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl8wr\" (UniqueName: \"kubernetes.io/projected/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-kube-api-access-xl8wr\") pod \"barbican-api-78f464b796-86kkm\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.704944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c2665c-ef67-4325-bad9-7e42cf3195bd-logs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.705020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-config-data\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.705079 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qg2b\" (UniqueName: \"kubernetes.io/projected/43c2665c-ef67-4325-bad9-7e42cf3195bd-kube-api-access-5qg2b\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.705108 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-public-tls-certs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.705147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-scripts\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.705179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-combined-ca-bundle\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc 
kubenswrapper[4768]: I1124 18:06:51.705199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-internal-tls-certs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.705651 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c2665c-ef67-4325-bad9-7e42cf3195bd-logs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.709682 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b7d468cdf-9fjfm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.709860 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-config-data\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.710071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-scripts\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.710112 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-combined-ca-bundle\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.716173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-public-tls-certs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.717247 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c2665c-ef67-4325-bad9-7e42cf3195bd-internal-tls-certs\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.721646 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qg2b\" (UniqueName: \"kubernetes.io/projected/43c2665c-ef67-4325-bad9-7e42cf3195bd-kube-api-access-5qg2b\") pod \"placement-76b54949f4-59kjn\" (UID: \"43c2665c-ef67-4325-bad9-7e42cf3195bd\") " pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.789710 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.801687 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.843512 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:51 crc kubenswrapper[4768]: I1124 18:06:51.858984 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" podUID="dfe35d6f-6397-4460-8366-07504f40963f" containerName="dnsmasq-dns" containerID="cri-o://e0dcd4b10a8fad097eaad004133aad8be95f8d23a3c18d93edfac7053ba05500" gracePeriod=10 Nov 24 18:06:52 crc kubenswrapper[4768]: I1124 18:06:52.869272 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfe35d6f-6397-4460-8366-07504f40963f" containerID="e0dcd4b10a8fad097eaad004133aad8be95f8d23a3c18d93edfac7053ba05500" exitCode=0 Nov 24 18:06:52 crc kubenswrapper[4768]: I1124 18:06:52.869330 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" event={"ID":"dfe35d6f-6397-4460-8366-07504f40963f","Type":"ContainerDied","Data":"e0dcd4b10a8fad097eaad004133aad8be95f8d23a3c18d93edfac7053ba05500"} Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.724330 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.748031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-sb\") pod \"dfe35d6f-6397-4460-8366-07504f40963f\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.748177 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89g6h\" (UniqueName: \"kubernetes.io/projected/dfe35d6f-6397-4460-8366-07504f40963f-kube-api-access-89g6h\") pod \"dfe35d6f-6397-4460-8366-07504f40963f\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.748297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-nb\") pod \"dfe35d6f-6397-4460-8366-07504f40963f\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.748326 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-config\") pod \"dfe35d6f-6397-4460-8366-07504f40963f\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.748388 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-dns-svc\") pod \"dfe35d6f-6397-4460-8366-07504f40963f\" (UID: \"dfe35d6f-6397-4460-8366-07504f40963f\") " Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.756323 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe35d6f-6397-4460-8366-07504f40963f-kube-api-access-89g6h" (OuterVolumeSpecName: "kube-api-access-89g6h") pod "dfe35d6f-6397-4460-8366-07504f40963f" (UID: "dfe35d6f-6397-4460-8366-07504f40963f"). InnerVolumeSpecName "kube-api-access-89g6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.821992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-config" (OuterVolumeSpecName: "config") pod "dfe35d6f-6397-4460-8366-07504f40963f" (UID: "dfe35d6f-6397-4460-8366-07504f40963f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.822847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dfe35d6f-6397-4460-8366-07504f40963f" (UID: "dfe35d6f-6397-4460-8366-07504f40963f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.825244 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfe35d6f-6397-4460-8366-07504f40963f" (UID: "dfe35d6f-6397-4460-8366-07504f40963f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.842970 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dfe35d6f-6397-4460-8366-07504f40963f" (UID: "dfe35d6f-6397-4460-8366-07504f40963f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.849965 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89g6h\" (UniqueName: \"kubernetes.io/projected/dfe35d6f-6397-4460-8366-07504f40963f-kube-api-access-89g6h\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.850003 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.850016 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.850026 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.850034 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe35d6f-6397-4460-8366-07504f40963f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.882025 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" event={"ID":"dfe35d6f-6397-4460-8366-07504f40963f","Type":"ContainerDied","Data":"cb7c61467d62bf84f9c9299402fff2b94d9b34a2ca23e6d3092a346b1cde91f8"} Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.882080 4768 scope.go:117] "RemoveContainer" 
containerID="e0dcd4b10a8fad097eaad004133aad8be95f8d23a3c18d93edfac7053ba05500" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.882200 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-9hkml" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.888126 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerStarted","Data":"9b37a8519bd267bcba127733196da828e65626303691e6ce5e84b3d746b30ea9"} Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.913041 4768 scope.go:117] "RemoveContainer" containerID="4b642ee2248e7d6abbc888a578977ddd2615490080844f67ec4dc83240208847" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.945435 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cbf4cbf68-zhhj4"] Nov 24 18:06:53 crc kubenswrapper[4768]: E1124 18:06:53.945946 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe35d6f-6397-4460-8366-07504f40963f" containerName="init" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.945958 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe35d6f-6397-4460-8366-07504f40963f" containerName="init" Nov 24 18:06:53 crc kubenswrapper[4768]: E1124 18:06:53.945968 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe35d6f-6397-4460-8366-07504f40963f" containerName="dnsmasq-dns" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.945975 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe35d6f-6397-4460-8366-07504f40963f" containerName="dnsmasq-dns" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.946161 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe35d6f-6397-4460-8366-07504f40963f" containerName="dnsmasq-dns" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.947036 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.950169 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.950336 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.953219 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-9hkml"] Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.960435 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-9hkml"] Nov 24 18:06:53 crc kubenswrapper[4768]: I1124 18:06:53.967182 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cbf4cbf68-zhhj4"] Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.055687 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22661dfe-b7e1-4894-ae13-dab13e09c845-logs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.055743 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-config-data-custom\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.055788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-public-tls-certs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.056240 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-combined-ca-bundle\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.056555 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-config-data\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.056627 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-internal-tls-certs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.056649 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnm5\" (UniqueName: 
\"kubernetes.io/projected/22661dfe-b7e1-4894-ae13-dab13e09c845-kube-api-access-tqnm5\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.063829 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b7d468cdf-9fjfm"] Nov 24 18:06:54 crc kubenswrapper[4768]: W1124 18:06:54.065137 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb343e1cc_a6b5_4074_98b3_a4bddb9b2730.slice/crio-0d3896fc63eb7340a7a9c0c33568a6c95d08566c1d424fa975df58a7b052dee1 WatchSource:0}: Error finding container 0d3896fc63eb7340a7a9c0c33568a6c95d08566c1d424fa975df58a7b052dee1: Status 404 returned error can't find the container with id 0d3896fc63eb7340a7a9c0c33568a6c95d08566c1d424fa975df58a7b052dee1 Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.146716 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-844dbf79df-5t2np"] Nov 24 18:06:54 crc kubenswrapper[4768]: W1124 18:06:54.153915 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f9024a7_971e_460c_8b41_157dc2403a44.slice/crio-2cad2c03a8feb2de7aa0cbe04e1d1337850a2fcebf0be2e9308afa7bf5eda0ec WatchSource:0}: Error finding container 2cad2c03a8feb2de7aa0cbe04e1d1337850a2fcebf0be2e9308afa7bf5eda0ec: Status 404 returned error can't find the container with id 2cad2c03a8feb2de7aa0cbe04e1d1337850a2fcebf0be2e9308afa7bf5eda0ec Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158026 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-config-data\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158096 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-internal-tls-certs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158125 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqnm5\" (UniqueName: \"kubernetes.io/projected/22661dfe-b7e1-4894-ae13-dab13e09c845-kube-api-access-tqnm5\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158158 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22661dfe-b7e1-4894-ae13-dab13e09c845-logs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-config-data-custom\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " 
pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-public-tls-certs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.158275 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-combined-ca-bundle\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.159189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22661dfe-b7e1-4894-ae13-dab13e09c845-logs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.161884 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-internal-tls-certs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.162379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-combined-ca-bundle\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.162479 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-public-tls-certs\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.165598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-config-data\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.166533 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22661dfe-b7e1-4894-ae13-dab13e09c845-config-data-custom\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.173388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqnm5\" (UniqueName: \"kubernetes.io/projected/22661dfe-b7e1-4894-ae13-dab13e09c845-kube-api-access-tqnm5\") pod \"barbican-api-7cbf4cbf68-zhhj4\" (UID: \"22661dfe-b7e1-4894-ae13-dab13e09c845\") " pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc 
kubenswrapper[4768]: I1124 18:06:54.242179 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f464b796-86kkm"] Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.250264 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-r57xq"] Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.261341 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-97698dcdb-54zqg"] Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.274536 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.402428 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76b54949f4-59kjn"] Nov 24 18:06:54 crc kubenswrapper[4768]: W1124 18:06:54.421024 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c2665c_ef67_4325_bad9_7e42cf3195bd.slice/crio-4b354d36d59689c448dc962bd119474f2fd77ea3c5d77789902990d43f454b99 WatchSource:0}: Error finding container 4b354d36d59689c448dc962bd119474f2fd77ea3c5d77789902990d43f454b99: Status 404 returned error can't find the container with id 4b354d36d59689c448dc962bd119474f2fd77ea3c5d77789902990d43f454b99 Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.422412 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56748c45b5-4df84"] Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.744652 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cbf4cbf68-zhhj4"] Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.905111 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerID="fa6ffc12ab3dd51bec43cc1beee5e75efae230cef52eab64091f2899a0546936" exitCode=0 Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.905207 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" event={"ID":"f9aef2aa-c3bb-4b06-b204-7b557645d5e7","Type":"ContainerDied","Data":"fa6ffc12ab3dd51bec43cc1beee5e75efae230cef52eab64091f2899a0546936"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.905247 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" event={"ID":"f9aef2aa-c3bb-4b06-b204-7b557645d5e7","Type":"ContainerStarted","Data":"b0f4680f2574057e7f17995c4eb5105fb2d5fbf17434ec6d0741407903d60fb4"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.908221 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b54949f4-59kjn" event={"ID":"43c2665c-ef67-4325-bad9-7e42cf3195bd","Type":"ContainerStarted","Data":"0e313dc19ce935f2e32394c16bc94e1c3b2e85c46f39f3e8259b7f5e015e27bf"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.908292 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b54949f4-59kjn" event={"ID":"43c2665c-ef67-4325-bad9-7e42cf3195bd","Type":"ContainerStarted","Data":"4b354d36d59689c448dc962bd119474f2fd77ea3c5d77789902990d43f454b99"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.911093 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b7d468cdf-9fjfm" event={"ID":"b343e1cc-a6b5-4074-98b3-a4bddb9b2730","Type":"ContainerStarted","Data":"0d3896fc63eb7340a7a9c0c33568a6c95d08566c1d424fa975df58a7b052dee1"} Nov 24 18:06:54 crc 
kubenswrapper[4768]: I1124 18:06:54.913675 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f464b796-86kkm" event={"ID":"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a","Type":"ContainerStarted","Data":"272c17fed7191dea14fcdb6a22f328b0de03bdb17dafa43bd3875cc56ad60791"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.913713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f464b796-86kkm" event={"ID":"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a","Type":"ContainerStarted","Data":"37261a81eb64bbe0169c7c58d14f5f8f5d07ff3067818a1ca4518f91b7a1b741"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.920333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" event={"ID":"22661dfe-b7e1-4894-ae13-dab13e09c845","Type":"ContainerStarted","Data":"88df139fd2311e3a9f9ddb5b97860b2e49328ad29059980a4d1308cf5b486153"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.929220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" event={"ID":"5cb6b015-ae5e-438f-9aec-c25982a2febc","Type":"ContainerStarted","Data":"b134e19beb33cb657a0e4584c7f708ab271595d270b40c76d9b5bbd17583efa0"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.941319 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844dbf79df-5t2np" event={"ID":"6f9024a7-971e-460c-8b41-157dc2403a44","Type":"ContainerStarted","Data":"5016b49511cdadf451a36b7c6f071adf9bf4fa1eeb98bac810ca0087fb444d3e"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.941393 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844dbf79df-5t2np" event={"ID":"6f9024a7-971e-460c-8b41-157dc2403a44","Type":"ContainerStarted","Data":"aa94e3d0a044c4b0f217df16c7cde0e8fa21dfb3c791cfe047ce910eed47e388"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.941408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-844dbf79df-5t2np" event={"ID":"6f9024a7-971e-460c-8b41-157dc2403a44","Type":"ContainerStarted","Data":"2cad2c03a8feb2de7aa0cbe04e1d1337850a2fcebf0be2e9308afa7bf5eda0ec"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.941665 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.945400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56748c45b5-4df84" event={"ID":"434c7b39-9f1a-4032-b6fb-41c315a3a521","Type":"ContainerStarted","Data":"bbfaa7339b8379b9e87c80cbc3cf16991dff0f4e8a9d2acd63888cfa97e0a57d"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.945434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56748c45b5-4df84" event={"ID":"434c7b39-9f1a-4032-b6fb-41c315a3a521","Type":"ContainerStarted","Data":"db6a8cee6a1ff1d2d27cc31d69d6fb54e6f0b0c068ec318eeae46307fc9ed2ef"} Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.946001 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-56748c45b5-4df84" Nov 24 18:06:54 crc kubenswrapper[4768]: I1124 18:06:54.963305 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-844dbf79df-5t2np" podStartSLOduration=6.963285098 podStartE2EDuration="6.963285098s" podCreationTimestamp="2025-11-24 18:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 18:06:54.961167701 +0000 UTC m=+1053.821749478" watchObservedRunningTime="2025-11-24 18:06:54.963285098 +0000 UTC m=+1053.823866875" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.908258 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe35d6f-6397-4460-8366-07504f40963f" path="/var/lib/kubelet/pods/dfe35d6f-6397-4460-8366-07504f40963f/volumes" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.954663 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" event={"ID":"22661dfe-b7e1-4894-ae13-dab13e09c845","Type":"ContainerStarted","Data":"72bf58bcadd0d5d37125f17b909c952fc5c238a4330be4fa2783807f7133736f"} Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.954710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" event={"ID":"22661dfe-b7e1-4894-ae13-dab13e09c845","Type":"ContainerStarted","Data":"37d25ac4ddf17b3f3b834370b713b07f1f212fd0da1f789577f765e42241fe96"} Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.955117 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.956453 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" event={"ID":"f9aef2aa-c3bb-4b06-b204-7b557645d5e7","Type":"ContainerStarted","Data":"46ef77f4e51087cb9ca5f02b593612c02aa2a6879333adbfae0ca81d3995b927"} Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.956587 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.958417 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b54949f4-59kjn" event={"ID":"43c2665c-ef67-4325-bad9-7e42cf3195bd","Type":"ContainerStarted","Data":"52dc48aaecd50086944b98a1a54b9c71920efa11c11c1b77cd3b558f1c958258"} Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.958530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.958612 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.959944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f464b796-86kkm" event={"ID":"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a","Type":"ContainerStarted","Data":"1e60f33d439d50e4a6d84eaee84109efc8a11f4e41489d3c41be83d4c74923fb"} Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.975845 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-56748c45b5-4df84" podStartSLOduration=4.975820225 podStartE2EDuration="4.975820225s" podCreationTimestamp="2025-11-24 18:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:54.983902275 +0000 UTC m=+1053.844484052" watchObservedRunningTime="2025-11-24 18:06:55.975820225 +0000 UTC m=+1054.836402012" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.980004 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" podStartSLOduration=2.979984077 podStartE2EDuration="2.979984077s" 
podCreationTimestamp="2025-11-24 18:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:55.970753031 +0000 UTC m=+1054.831334808" watchObservedRunningTime="2025-11-24 18:06:55.979984077 +0000 UTC m=+1054.840565854" Nov 24 18:06:55 crc kubenswrapper[4768]: I1124 18:06:55.992008 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-76b54949f4-59kjn" podStartSLOduration=4.991978685 podStartE2EDuration="4.991978685s" podCreationTimestamp="2025-11-24 18:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:55.986380566 +0000 UTC m=+1054.846962343" watchObservedRunningTime="2025-11-24 18:06:55.991978685 +0000 UTC m=+1054.852560462" Nov 24 18:06:56 crc kubenswrapper[4768]: I1124 18:06:56.007470 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78f464b796-86kkm" podStartSLOduration=5.007442306 podStartE2EDuration="5.007442306s" podCreationTimestamp="2025-11-24 18:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:56.004841427 +0000 UTC m=+1054.865423224" watchObservedRunningTime="2025-11-24 18:06:56.007442306 +0000 UTC m=+1054.868024083" Nov 24 18:06:56 crc kubenswrapper[4768]: I1124 18:06:56.023857 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" podStartSLOduration=5.023832511 podStartE2EDuration="5.023832511s" podCreationTimestamp="2025-11-24 18:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:06:56.019917958 +0000 UTC m=+1054.880499735" watchObservedRunningTime="2025-11-24 18:06:56.023832511 +0000 UTC m=+1054.884414288" Nov 24 18:06:56 crc kubenswrapper[4768]: I1124 18:06:56.802241 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:56 crc kubenswrapper[4768]: I1124 18:06:56.802701 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:06:56 crc kubenswrapper[4768]: I1124 18:06:56.971706 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:06:57 crc kubenswrapper[4768]: I1124 18:06:57.985416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b7d468cdf-9fjfm" event={"ID":"b343e1cc-a6b5-4074-98b3-a4bddb9b2730","Type":"ContainerStarted","Data":"21be6ce792f91125fc344930d1a1c0eff534759bd8bbdd60e7f651b948ed596d"} Nov 24 18:06:57 crc kubenswrapper[4768]: I1124 18:06:57.986202 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b7d468cdf-9fjfm" event={"ID":"b343e1cc-a6b5-4074-98b3-a4bddb9b2730","Type":"ContainerStarted","Data":"7c79cc3fcef7b5a323c7430e60fc47197c39de40d5fafde7aae494bc580c2101"} Nov 24 18:06:57 crc kubenswrapper[4768]: I1124 18:06:57.988393 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" event={"ID":"5cb6b015-ae5e-438f-9aec-c25982a2febc","Type":"ContainerStarted","Data":"0b27cd7239dc33b420c5d11d8cd02753c18f947e078b742f37616fbbfa742f37"} Nov 24 18:06:57 crc 
kubenswrapper[4768]: I1124 18:06:57.988459 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" event={"ID":"5cb6b015-ae5e-438f-9aec-c25982a2febc","Type":"ContainerStarted","Data":"4b5cc06335b08213971a5ca3e053ac88f9e179f3c6f493ea1e677c58285d4322"} Nov 24 18:06:58 crc kubenswrapper[4768]: I1124 18:06:58.009435 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-b7d468cdf-9fjfm" podStartSLOduration=3.624616558 podStartE2EDuration="7.009410108s" podCreationTimestamp="2025-11-24 18:06:51 +0000 UTC" firstStartedPulling="2025-11-24 18:06:54.067757839 +0000 UTC m=+1052.928339616" lastFinishedPulling="2025-11-24 18:06:57.452551389 +0000 UTC m=+1056.313133166" observedRunningTime="2025-11-24 18:06:57.998856987 +0000 UTC m=+1056.859438764" watchObservedRunningTime="2025-11-24 18:06:58.009410108 +0000 UTC m=+1056.869991885" Nov 24 18:06:58 crc kubenswrapper[4768]: I1124 18:06:58.029972 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-97698dcdb-54zqg" podStartSLOduration=3.831594149 podStartE2EDuration="7.029950414s" podCreationTimestamp="2025-11-24 18:06:51 +0000 UTC" firstStartedPulling="2025-11-24 18:06:54.262809993 +0000 UTC m=+1053.123391770" lastFinishedPulling="2025-11-24 18:06:57.461166238 +0000 UTC m=+1056.321748035" observedRunningTime="2025-11-24 18:06:58.020807321 +0000 UTC m=+1056.881389098" watchObservedRunningTime="2025-11-24 18:06:58.029950414 +0000 UTC m=+1056.890532191" Nov 24 18:07:00 crc kubenswrapper[4768]: I1124 18:07:00.020315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpggd" event={"ID":"8ed13008-e82b-40d6-af72-abfb5a1223fb","Type":"ContainerStarted","Data":"0096f2a42ff3dfb6df3add48a790ac51d9369bb687b80fcb0267a293bb8cd248"} Nov 24 18:07:00 crc kubenswrapper[4768]: I1124 18:07:00.054275 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wpggd" podStartSLOduration=2.87051278 podStartE2EDuration="40.05425177s" podCreationTimestamp="2025-11-24 18:06:20 +0000 UTC" firstStartedPulling="2025-11-24 18:06:21.158618646 +0000 UTC m=+1020.019200423" lastFinishedPulling="2025-11-24 18:06:58.342357636 +0000 UTC m=+1057.202939413" observedRunningTime="2025-11-24 18:07:00.044680206 +0000 UTC m=+1058.905262093" watchObservedRunningTime="2025-11-24 18:07:00.05425177 +0000 UTC m=+1058.914833547" Nov 24 18:07:00 crc kubenswrapper[4768]: I1124 18:07:00.929240 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:07:01 crc kubenswrapper[4768]: I1124 18:07:01.791885 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:07:01 crc kubenswrapper[4768]: I1124 18:07:01.867366 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vm92p"] Nov 24 18:07:01 crc kubenswrapper[4768]: I1124 18:07:01.867653 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerName="dnsmasq-dns" containerID="cri-o://8420e7989603046d91339c3f4e7d49d3d212580a5589625f3b76a80ffa791ad4" gracePeriod=10 Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.064613 4768 generic.go:334] "Generic (PLEG): container finished" podID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" 
containerID="8420e7989603046d91339c3f4e7d49d3d212580a5589625f3b76a80ffa791ad4" exitCode=0 Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.065028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" event={"ID":"175cddfa-c51a-40dc-be36-97af7e8b7cc2","Type":"ContainerDied","Data":"8420e7989603046d91339c3f4e7d49d3d212580a5589625f3b76a80ffa791ad4"} Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.620980 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cbf4cbf68-zhhj4" Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.685939 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f464b796-86kkm"] Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.686254 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" containerID="cri-o://272c17fed7191dea14fcdb6a22f328b0de03bdb17dafa43bd3875cc56ad60791" gracePeriod=30 Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.686373 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" containerID="cri-o://1e60f33d439d50e4a6d84eaee84109efc8a11f4e41489d3c41be83d4c74923fb" gracePeriod=30 Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.696239 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": EOF" Nov 24 18:07:02 crc kubenswrapper[4768]: I1124 18:07:02.696320 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": EOF" Nov 24 18:07:03 crc kubenswrapper[4768]: I1124 18:07:03.078963 4768 generic.go:334] "Generic (PLEG): container finished" podID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerID="272c17fed7191dea14fcdb6a22f328b0de03bdb17dafa43bd3875cc56ad60791" exitCode=143 Nov 24 18:07:03 crc kubenswrapper[4768]: I1124 18:07:03.079016 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f464b796-86kkm" event={"ID":"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a","Type":"ContainerDied","Data":"272c17fed7191dea14fcdb6a22f328b0de03bdb17dafa43bd3875cc56ad60791"} Nov 24 18:07:04 crc kubenswrapper[4768]: I1124 18:07:04.993686 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:07:05 crc kubenswrapper[4768]: E1124 18:07:05.086928 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.096407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerStarted","Data":"1fbb8e53a79f7088c31c3555ebbfaba7165322e4e5bee316942049807b77bd08"} Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.096636 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="ceilometer-notification-agent" containerID="cri-o://756c8026532e94696fa6b9fa0598cd4361a90365b3db883f9864b4341d9ed87d" gracePeriod=30 Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.096972 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.097310 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="proxy-httpd" containerID="cri-o://1fbb8e53a79f7088c31c3555ebbfaba7165322e4e5bee316942049807b77bd08" gracePeriod=30 Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.097348 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="sg-core" containerID="cri-o://9b37a8519bd267bcba127733196da828e65626303691e6ce5e84b3d746b30ea9" gracePeriod=30 Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.102584 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" event={"ID":"175cddfa-c51a-40dc-be36-97af7e8b7cc2","Type":"ContainerDied","Data":"e060cc0d270527edb5d637afacb3a12d6ae236a90a554cfb21d8fd5d7914dba8"} Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.102635 4768 scope.go:117] "RemoveContainer" containerID="8420e7989603046d91339c3f4e7d49d3d212580a5589625f3b76a80ffa791ad4" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.102716 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vm92p" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.105702 4768 generic.go:334] "Generic (PLEG): container finished" podID="8ed13008-e82b-40d6-af72-abfb5a1223fb" containerID="0096f2a42ff3dfb6df3add48a790ac51d9369bb687b80fcb0267a293bb8cd248" exitCode=0 Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.105743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpggd" event={"ID":"8ed13008-e82b-40d6-af72-abfb5a1223fb","Type":"ContainerDied","Data":"0096f2a42ff3dfb6df3add48a790ac51d9369bb687b80fcb0267a293bb8cd248"} Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.109877 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-dns-svc\") pod \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.109939 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxsn\" (UniqueName: \"kubernetes.io/projected/175cddfa-c51a-40dc-be36-97af7e8b7cc2-kube-api-access-hfxsn\") pod \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.109961 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-nb\") pod \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.110075 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-config\") pod \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.110166 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-sb\") pod \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\" (UID: \"175cddfa-c51a-40dc-be36-97af7e8b7cc2\") " Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.114678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175cddfa-c51a-40dc-be36-97af7e8b7cc2-kube-api-access-hfxsn" (OuterVolumeSpecName: "kube-api-access-hfxsn") pod "175cddfa-c51a-40dc-be36-97af7e8b7cc2" (UID: "175cddfa-c51a-40dc-be36-97af7e8b7cc2"). InnerVolumeSpecName "kube-api-access-hfxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.130158 4768 scope.go:117] "RemoveContainer" containerID="78946e99e25b3b2e600aad0bdb4090af4ee0b5f3d3c69f63cd5cc8d6a0ec8c42" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.159329 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "175cddfa-c51a-40dc-be36-97af7e8b7cc2" (UID: "175cddfa-c51a-40dc-be36-97af7e8b7cc2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.164875 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "175cddfa-c51a-40dc-be36-97af7e8b7cc2" (UID: "175cddfa-c51a-40dc-be36-97af7e8b7cc2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.166431 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "175cddfa-c51a-40dc-be36-97af7e8b7cc2" (UID: "175cddfa-c51a-40dc-be36-97af7e8b7cc2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.178093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-config" (OuterVolumeSpecName: "config") pod "175cddfa-c51a-40dc-be36-97af7e8b7cc2" (UID: "175cddfa-c51a-40dc-be36-97af7e8b7cc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.213295 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.213455 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.213469 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.213522 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxsn\" (UniqueName: \"kubernetes.io/projected/175cddfa-c51a-40dc-be36-97af7e8b7cc2-kube-api-access-hfxsn\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.213541 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/175cddfa-c51a-40dc-be36-97af7e8b7cc2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.442878 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vm92p"] Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.452624 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vm92p"] Nov 24 18:07:05 crc kubenswrapper[4768]: I1124 18:07:05.932803 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" path="/var/lib/kubelet/pods/175cddfa-c51a-40dc-be36-97af7e8b7cc2/volumes" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.122522 4768 generic.go:334] "Generic (PLEG): container finished" podID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerID="9b37a8519bd267bcba127733196da828e65626303691e6ce5e84b3d746b30ea9" exitCode=2 Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 
18:07:06.122605 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerDied","Data":"9b37a8519bd267bcba127733196da828e65626303691e6ce5e84b3d746b30ea9"} Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.449281 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wpggd" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.545642 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-db-sync-config-data\") pod \"8ed13008-e82b-40d6-af72-abfb5a1223fb\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.545744 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-combined-ca-bundle\") pod \"8ed13008-e82b-40d6-af72-abfb5a1223fb\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.545788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-config-data\") pod \"8ed13008-e82b-40d6-af72-abfb5a1223fb\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.545817 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-scripts\") pod \"8ed13008-e82b-40d6-af72-abfb5a1223fb\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.545894 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdfn\" (UniqueName: \"kubernetes.io/projected/8ed13008-e82b-40d6-af72-abfb5a1223fb-kube-api-access-xtdfn\") pod \"8ed13008-e82b-40d6-af72-abfb5a1223fb\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.545964 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed13008-e82b-40d6-af72-abfb5a1223fb-etc-machine-id\") pod \"8ed13008-e82b-40d6-af72-abfb5a1223fb\" (UID: \"8ed13008-e82b-40d6-af72-abfb5a1223fb\") " Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.546275 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ed13008-e82b-40d6-af72-abfb5a1223fb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ed13008-e82b-40d6-af72-abfb5a1223fb" (UID: "8ed13008-e82b-40d6-af72-abfb5a1223fb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.551407 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed13008-e82b-40d6-af72-abfb5a1223fb-kube-api-access-xtdfn" (OuterVolumeSpecName: "kube-api-access-xtdfn") pod "8ed13008-e82b-40d6-af72-abfb5a1223fb" (UID: "8ed13008-e82b-40d6-af72-abfb5a1223fb"). InnerVolumeSpecName "kube-api-access-xtdfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.552580 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-scripts" (OuterVolumeSpecName: "scripts") pod "8ed13008-e82b-40d6-af72-abfb5a1223fb" (UID: "8ed13008-e82b-40d6-af72-abfb5a1223fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.552840 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8ed13008-e82b-40d6-af72-abfb5a1223fb" (UID: "8ed13008-e82b-40d6-af72-abfb5a1223fb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.576804 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed13008-e82b-40d6-af72-abfb5a1223fb" (UID: "8ed13008-e82b-40d6-af72-abfb5a1223fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.598138 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-config-data" (OuterVolumeSpecName: "config-data") pod "8ed13008-e82b-40d6-af72-abfb5a1223fb" (UID: "8ed13008-e82b-40d6-af72-abfb5a1223fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.648329 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.648384 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.648397 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.648409 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed13008-e82b-40d6-af72-abfb5a1223fb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.648419 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdfn\" (UniqueName: \"kubernetes.io/projected/8ed13008-e82b-40d6-af72-abfb5a1223fb-kube-api-access-xtdfn\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:06 crc kubenswrapper[4768]: I1124 18:07:06.648430 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed13008-e82b-40d6-af72-abfb5a1223fb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.081288 4768 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": read tcp 10.217.0.2:42390->10.217.0.149:9311: read: connection reset by peer" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.081350 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": read tcp 10.217.0.2:42404->10.217.0.149:9311: read: connection reset by peer" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.081816 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": dial tcp 10.217.0.149:9311: connect: connection refused" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.081807 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f464b796-86kkm" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": dial tcp 10.217.0.149:9311: connect: connection refused" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.135345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wpggd" event={"ID":"8ed13008-e82b-40d6-af72-abfb5a1223fb","Type":"ContainerDied","Data":"58ed14d30adcb38bc39b50e9f93da38a0c6c78603ec940812c9cb0d76b286332"} Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.135423 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ed14d30adcb38bc39b50e9f93da38a0c6c78603ec940812c9cb0d76b286332" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.135451 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wpggd" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.138210 4768 generic.go:334] "Generic (PLEG): container finished" podID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerID="1e60f33d439d50e4a6d84eaee84109efc8a11f4e41489d3c41be83d4c74923fb" exitCode=0 Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.138265 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f464b796-86kkm" event={"ID":"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a","Type":"ContainerDied","Data":"1e60f33d439d50e4a6d84eaee84109efc8a11f4e41489d3c41be83d4c74923fb"} Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.358092 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439267 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:07 crc kubenswrapper[4768]: E1124 18:07:07.439653 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439672 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" Nov 24 18:07:07 crc kubenswrapper[4768]: E1124 18:07:07.439685 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerName="init" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439691 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerName="init" Nov 24 18:07:07 crc kubenswrapper[4768]: E1124 18:07:07.439707 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed13008-e82b-40d6-af72-abfb5a1223fb" containerName="cinder-db-sync" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439715 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed13008-e82b-40d6-af72-abfb5a1223fb" containerName="cinder-db-sync" Nov 24 18:07:07 crc kubenswrapper[4768]: E1124 18:07:07.439733 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439740 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" Nov 24 18:07:07 crc kubenswrapper[4768]: E1124 18:07:07.439751 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerName="dnsmasq-dns" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439757 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerName="dnsmasq-dns" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439939 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api-log" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439958 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" containerName="barbican-api" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439966 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed13008-e82b-40d6-af72-abfb5a1223fb" containerName="cinder-db-sync" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.439988 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="175cddfa-c51a-40dc-be36-97af7e8b7cc2" containerName="dnsmasq-dns" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.440902 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.451865 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nrcgw" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.452170 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.452342 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.452571 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.462464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data\") pod \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.462534 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data-custom\") pod \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.462626 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-combined-ca-bundle\") pod \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.462709 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-logs\") pod \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.462755 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.462811 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl8wr\" (UniqueName: \"kubernetes.io/projected/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-kube-api-access-xl8wr\") pod \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\" (UID: \"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a\") " Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.469733 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-kube-api-access-xl8wr" (OuterVolumeSpecName: "kube-api-access-xl8wr") pod "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" (UID: "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a"). InnerVolumeSpecName "kube-api-access-xl8wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.470740 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-logs" (OuterVolumeSpecName: "logs") pod "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" (UID: "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.474555 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" (UID: "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.507440 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" (UID: "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.530644 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-tjjzp"] Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.532469 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.554177 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data" (OuterVolumeSpecName: "config-data") pod "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" (UID: "f014fd67-6d4a-4bdd-a711-a4023cc1ff3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.555017 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-tjjzp"] Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.564809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-scripts\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.564884 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce13a66a-1a9b-490b-8263-376c7e7a86d0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.564915 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpzc\" (UniqueName: \"kubernetes.io/projected/ce13a66a-1a9b-490b-8263-376c7e7a86d0-kube-api-access-mgpzc\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.564959 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.564985 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.565089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.565503 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.565551 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.565569 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.565582 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.565594 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl8wr\" (UniqueName: \"kubernetes.io/projected/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a-kube-api-access-xl8wr\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.629140 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.631415 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.641428 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.641752 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667518 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667599 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2pc\" (UniqueName: \"kubernetes.io/projected/3650ec4f-2853-4822-a5ee-47b1b642fdbd-kube-api-access-rr2pc\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667683 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-scripts\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce13a66a-1a9b-490b-8263-376c7e7a86d0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667738 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgpzc\" (UniqueName: \"kubernetes.io/projected/ce13a66a-1a9b-490b-8263-376c7e7a86d0-kube-api-access-mgpzc\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-config\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667803 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667844 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667893 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.667912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.673326 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce13a66a-1a9b-490b-8263-376c7e7a86d0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.674164 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-scripts\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.676035 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.677960 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.685891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.697794 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgpzc\" (UniqueName: \"kubernetes.io/projected/ce13a66a-1a9b-490b-8263-376c7e7a86d0-kube-api-access-mgpzc\") pod \"cinder-scheduler-0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769647 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " 
pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769778 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2pc\" (UniqueName: \"kubernetes.io/projected/3650ec4f-2853-4822-a5ee-47b1b642fdbd-kube-api-access-rr2pc\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769883 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwz5b\" (UniqueName: \"kubernetes.io/projected/2e955ebb-07b3-4997-b373-7e39827a2d90-kube-api-access-kwz5b\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e955ebb-07b3-4997-b373-7e39827a2d90-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769952 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-scripts\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.769981 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.770008 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-config\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.770058 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e955ebb-07b3-4997-b373-7e39827a2d90-logs\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.770104 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.770883 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.771079 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-config\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.771517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.772278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.787379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2pc\" (UniqueName: \"kubernetes.io/projected/3650ec4f-2853-4822-a5ee-47b1b642fdbd-kube-api-access-rr2pc\") pod \"dnsmasq-dns-6d97fcdd8f-tjjzp\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.846356 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.871968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.873555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.873595 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwz5b\" (UniqueName: \"kubernetes.io/projected/2e955ebb-07b3-4997-b373-7e39827a2d90-kube-api-access-kwz5b\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.873638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e955ebb-07b3-4997-b373-7e39827a2d90-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.873660 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-scripts\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.873685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.873724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e955ebb-07b3-4997-b373-7e39827a2d90-logs\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.874138 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e955ebb-07b3-4997-b373-7e39827a2d90-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.874163 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e955ebb-07b3-4997-b373-7e39827a2d90-logs\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.876575 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" 
Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.877008 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.877889 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-scripts\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.879594 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.893010 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwz5b\" (UniqueName: \"kubernetes.io/projected/2e955ebb-07b3-4997-b373-7e39827a2d90-kube-api-access-kwz5b\") pod \"cinder-api-0\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " pod="openstack/cinder-api-0" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.950353 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:07 crc kubenswrapper[4768]: I1124 18:07:07.963975 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.167518 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f464b796-86kkm" event={"ID":"f014fd67-6d4a-4bdd-a711-a4023cc1ff3a","Type":"ContainerDied","Data":"37261a81eb64bbe0169c7c58d14f5f8f5d07ff3067818a1ca4518f91b7a1b741"} Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.167579 4768 scope.go:117] "RemoveContainer" containerID="1e60f33d439d50e4a6d84eaee84109efc8a11f4e41489d3c41be83d4c74923fb" Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.167901 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f464b796-86kkm" Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.179006 4768 generic.go:334] "Generic (PLEG): container finished" podID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerID="756c8026532e94696fa6b9fa0598cd4361a90365b3db883f9864b4341d9ed87d" exitCode=0 Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.179049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerDied","Data":"756c8026532e94696fa6b9fa0598cd4361a90365b3db883f9864b4341d9ed87d"} Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.199705 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f464b796-86kkm"] Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.200809 4768 scope.go:117] "RemoveContainer" containerID="272c17fed7191dea14fcdb6a22f328b0de03bdb17dafa43bd3875cc56ad60791" Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.210566 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78f464b796-86kkm"] Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.320269 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:08 crc kubenswrapper[4768]: W1124 18:07:08.320471 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce13a66a_1a9b_490b_8263_376c7e7a86d0.slice/crio-fd9e350c535d79c1eeaffd7d08cb81b40abf7939f7719efa9646d662c9cdcd0c WatchSource:0}: Error finding container fd9e350c535d79c1eeaffd7d08cb81b40abf7939f7719efa9646d662c9cdcd0c: Status 404 returned error can't find the container with id fd9e350c535d79c1eeaffd7d08cb81b40abf7939f7719efa9646d662c9cdcd0c Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.463410 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-tjjzp"] Nov 24 18:07:08 crc kubenswrapper[4768]: I1124 18:07:08.552795 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:08 crc kubenswrapper[4768]: W1124 18:07:08.554473 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e955ebb_07b3_4997_b373_7e39827a2d90.slice/crio-2760831b1e1b556db40c42c360eddc0d51ea77f4a9bdce922243fc396cdf4122 WatchSource:0}: Error finding container 2760831b1e1b556db40c42c360eddc0d51ea77f4a9bdce922243fc396cdf4122: Status 404 returned error can't find the container with id 2760831b1e1b556db40c42c360eddc0d51ea77f4a9bdce922243fc396cdf4122 Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.205984 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce13a66a-1a9b-490b-8263-376c7e7a86d0","Type":"ContainerStarted","Data":"fd9e350c535d79c1eeaffd7d08cb81b40abf7939f7719efa9646d662c9cdcd0c"} Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.209083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e955ebb-07b3-4997-b373-7e39827a2d90","Type":"ContainerStarted","Data":"6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49"} Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.209132 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"2e955ebb-07b3-4997-b373-7e39827a2d90","Type":"ContainerStarted","Data":"2760831b1e1b556db40c42c360eddc0d51ea77f4a9bdce922243fc396cdf4122"} Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.211424 4768 generic.go:334] "Generic (PLEG): container finished" podID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerID="808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab" exitCode=0 Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.211467 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" event={"ID":"3650ec4f-2853-4822-a5ee-47b1b642fdbd","Type":"ContainerDied","Data":"808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab"} Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.211504 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" event={"ID":"3650ec4f-2853-4822-a5ee-47b1b642fdbd","Type":"ContainerStarted","Data":"f5c5048976d0c8aa8b32d95ad38df457543a10993ba95eab2d305da58659be17"} Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.912039 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f014fd67-6d4a-4bdd-a711-a4023cc1ff3a" path="/var/lib/kubelet/pods/f014fd67-6d4a-4bdd-a711-a4023cc1ff3a/volumes" Nov 24 18:07:09 crc kubenswrapper[4768]: I1124 18:07:09.937457 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.227911 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e955ebb-07b3-4997-b373-7e39827a2d90","Type":"ContainerStarted","Data":"e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633"} Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.228011 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api-log" containerID="cri-o://6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49" gracePeriod=30 Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.228113 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api" containerID="cri-o://e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633" gracePeriod=30 Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.228240 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.235236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" event={"ID":"3650ec4f-2853-4822-a5ee-47b1b642fdbd","Type":"ContainerStarted","Data":"2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880"} Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.235687 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.238420 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce13a66a-1a9b-490b-8263-376c7e7a86d0","Type":"ContainerStarted","Data":"e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092"} Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.275941 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" podStartSLOduration=3.275921031 podStartE2EDuration="3.275921031s" podCreationTimestamp="2025-11-24 18:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:10.26912276 +0000 UTC m=+1069.129704537" watchObservedRunningTime="2025-11-24 18:07:10.275921031 +0000 UTC m=+1069.136502808" Nov 24 18:07:10 crc kubenswrapper[4768]: I1124 18:07:10.280071 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.280051931 podStartE2EDuration="3.280051931s" podCreationTimestamp="2025-11-24 18:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:10.251916363 +0000 UTC m=+1069.112498130" watchObservedRunningTime="2025-11-24 18:07:10.280051931 +0000 UTC m=+1069.140633708" Nov 24 18:07:11 crc kubenswrapper[4768]: I1124 18:07:11.250733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce13a66a-1a9b-490b-8263-376c7e7a86d0","Type":"ContainerStarted","Data":"e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b"} Nov 24 18:07:11 crc kubenswrapper[4768]: I1124 18:07:11.253171 4768 generic.go:334] "Generic (PLEG): container finished" podID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerID="6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49" exitCode=143 Nov 24 18:07:11 crc kubenswrapper[4768]: I1124 18:07:11.253246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e955ebb-07b3-4997-b373-7e39827a2d90","Type":"ContainerDied","Data":"6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49"} Nov 24 18:07:11 crc kubenswrapper[4768]: I1124 18:07:11.271932 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.4859031209999998 podStartE2EDuration="4.271874228s" podCreationTimestamp="2025-11-24 18:07:07 +0000 UTC" firstStartedPulling="2025-11-24 18:07:08.322572311 +0000 UTC m=+1067.183154088" lastFinishedPulling="2025-11-24 18:07:09.108543418 +0000 UTC m=+1067.969125195" observedRunningTime="2025-11-24 18:07:11.267977245 +0000 UTC m=+1070.128559042" watchObservedRunningTime="2025-11-24 18:07:11.271874228 +0000 UTC m=+1070.132456005" Nov 24 18:07:12 crc kubenswrapper[4768]: I1124 18:07:12.846829 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 18:07:16 crc kubenswrapper[4768]: I1124 18:07:16.402156 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:07:17 crc kubenswrapper[4768]: I1124 18:07:17.952741 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.013531 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-r57xq"] Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.013770 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerName="dnsmasq-dns" containerID="cri-o://46ef77f4e51087cb9ca5f02b593612c02aa2a6879333adbfae0ca81d3995b927" gracePeriod=10 Nov 24 18:07:18 crc 
kubenswrapper[4768]: I1124 18:07:18.106785 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 18:07:18 crc kubenswrapper[4768]: E1124 18:07:18.107776 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9aef2aa_c3bb_4b06_b204_7b557645d5e7.slice/crio-conmon-46ef77f4e51087cb9ca5f02b593612c02aa2a6879333adbfae0ca81d3995b927.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.168902 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.334000 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerID="46ef77f4e51087cb9ca5f02b593612c02aa2a6879333adbfae0ca81d3995b927" exitCode=0 Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.334123 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" event={"ID":"f9aef2aa-c3bb-4b06-b204-7b557645d5e7","Type":"ContainerDied","Data":"46ef77f4e51087cb9ca5f02b593612c02aa2a6879333adbfae0ca81d3995b927"} Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.334237 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="cinder-scheduler" containerID="cri-o://e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092" gracePeriod=30 Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.334353 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="probe" containerID="cri-o://e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b" gracePeriod=30 Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.503349 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.608881 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-config\") pod \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.609013 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-dns-svc\") pod \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.609059 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmj2\" (UniqueName: \"kubernetes.io/projected/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-kube-api-access-6tmj2\") pod \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.609103 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-sb\") pod \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.609134 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-nb\") pod \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\" (UID: \"f9aef2aa-c3bb-4b06-b204-7b557645d5e7\") " Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.615375 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-kube-api-access-6tmj2" (OuterVolumeSpecName: "kube-api-access-6tmj2") pod "f9aef2aa-c3bb-4b06-b204-7b557645d5e7" (UID: "f9aef2aa-c3bb-4b06-b204-7b557645d5e7"). InnerVolumeSpecName "kube-api-access-6tmj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.655753 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9aef2aa-c3bb-4b06-b204-7b557645d5e7" (UID: "f9aef2aa-c3bb-4b06-b204-7b557645d5e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.660251 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9aef2aa-c3bb-4b06-b204-7b557645d5e7" (UID: "f9aef2aa-c3bb-4b06-b204-7b557645d5e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.667042 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-config" (OuterVolumeSpecName: "config") pod "f9aef2aa-c3bb-4b06-b204-7b557645d5e7" (UID: "f9aef2aa-c3bb-4b06-b204-7b557645d5e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.667851 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9aef2aa-c3bb-4b06-b204-7b557645d5e7" (UID: "f9aef2aa-c3bb-4b06-b204-7b557645d5e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.689095 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-844dbf79df-5t2np" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.711655 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.711698 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.711711 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmj2\" (UniqueName: \"kubernetes.io/projected/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-kube-api-access-6tmj2\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.711726 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.711738 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9aef2aa-c3bb-4b06-b204-7b557645d5e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.747571 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c9cf6cc78-ssqjz"] Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.747792 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c9cf6cc78-ssqjz" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-api" containerID="cri-o://3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd" gracePeriod=30 Nov 24 18:07:18 crc kubenswrapper[4768]: I1124 18:07:18.747944 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c9cf6cc78-ssqjz" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-httpd" containerID="cri-o://9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87" gracePeriod=30 Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.344110 4768 generic.go:334] "Generic (PLEG): container finished" podID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerID="e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b" exitCode=0 Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.344333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce13a66a-1a9b-490b-8263-376c7e7a86d0","Type":"ContainerDied","Data":"e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b"} Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.346149 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerID="9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87" exitCode=0 Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.346224 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c9cf6cc78-ssqjz" event={"ID":"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2","Type":"ContainerDied","Data":"9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87"} Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.348254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" event={"ID":"f9aef2aa-c3bb-4b06-b204-7b557645d5e7","Type":"ContainerDied","Data":"b0f4680f2574057e7f17995c4eb5105fb2d5fbf17434ec6d0741407903d60fb4"} Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.348317 4768 scope.go:117] "RemoveContainer" containerID="46ef77f4e51087cb9ca5f02b593612c02aa2a6879333adbfae0ca81d3995b927" Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.348381 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-r57xq" Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.370335 4768 scope.go:117] "RemoveContainer" containerID="fa6ffc12ab3dd51bec43cc1beee5e75efae230cef52eab64091f2899a0546936" Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.389722 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-r57xq"] Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.402431 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-r57xq"] Nov 24 18:07:19 crc kubenswrapper[4768]: I1124 18:07:19.909949 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" path="/var/lib/kubelet/pods/f9aef2aa-c3bb-4b06-b204-7b557645d5e7/volumes" Nov 24 18:07:20 crc kubenswrapper[4768]: I1124 18:07:20.137992 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 18:07:20 crc kubenswrapper[4768]: I1124 18:07:20.765563 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 18:07:20 crc kubenswrapper[4768]: I1124 18:07:20.987991 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.163633 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgpzc\" (UniqueName: \"kubernetes.io/projected/ce13a66a-1a9b-490b-8263-376c7e7a86d0-kube-api-access-mgpzc\") pod \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.163782 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce13a66a-1a9b-490b-8263-376c7e7a86d0-etc-machine-id\") pod \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.163838 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data\") pod \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.163913 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce13a66a-1a9b-490b-8263-376c7e7a86d0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ce13a66a-1a9b-490b-8263-376c7e7a86d0" (UID: "ce13a66a-1a9b-490b-8263-376c7e7a86d0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.164043 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-scripts\") pod \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.164088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data-custom\") pod \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.164126 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-combined-ca-bundle\") pod \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\" (UID: \"ce13a66a-1a9b-490b-8263-376c7e7a86d0\") " Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.164767 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce13a66a-1a9b-490b-8263-376c7e7a86d0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.169913 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-scripts" (OuterVolumeSpecName: "scripts") pod "ce13a66a-1a9b-490b-8263-376c7e7a86d0" (UID: "ce13a66a-1a9b-490b-8263-376c7e7a86d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.170415 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce13a66a-1a9b-490b-8263-376c7e7a86d0" (UID: "ce13a66a-1a9b-490b-8263-376c7e7a86d0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.170594 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce13a66a-1a9b-490b-8263-376c7e7a86d0-kube-api-access-mgpzc" (OuterVolumeSpecName: "kube-api-access-mgpzc") pod "ce13a66a-1a9b-490b-8263-376c7e7a86d0" (UID: "ce13a66a-1a9b-490b-8263-376c7e7a86d0"). InnerVolumeSpecName "kube-api-access-mgpzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.239676 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce13a66a-1a9b-490b-8263-376c7e7a86d0" (UID: "ce13a66a-1a9b-490b-8263-376c7e7a86d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.266566 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.266597 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.266609 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.266617 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgpzc\" (UniqueName: \"kubernetes.io/projected/ce13a66a-1a9b-490b-8263-376c7e7a86d0-kube-api-access-mgpzc\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.272301 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data" (OuterVolumeSpecName: "config-data") pod "ce13a66a-1a9b-490b-8263-376c7e7a86d0" (UID: "ce13a66a-1a9b-490b-8263-376c7e7a86d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.368366 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce13a66a-1a9b-490b-8263-376c7e7a86d0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.391105 4768 generic.go:334] "Generic (PLEG): container finished" podID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerID="e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092" exitCode=0 Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.391176 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce13a66a-1a9b-490b-8263-376c7e7a86d0","Type":"ContainerDied","Data":"e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092"} Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.391228 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ce13a66a-1a9b-490b-8263-376c7e7a86d0","Type":"ContainerDied","Data":"fd9e350c535d79c1eeaffd7d08cb81b40abf7939f7719efa9646d662c9cdcd0c"} Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.391222 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.391261 4768 scope.go:117] "RemoveContainer" containerID="e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.444393 4768 scope.go:117] "RemoveContainer" containerID="e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.447320 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.474611 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.481548 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:21 crc kubenswrapper[4768]: E1124 18:07:21.482009 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="probe" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482032 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="probe" Nov 24 18:07:21 crc kubenswrapper[4768]: E1124 18:07:21.482050 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerName="dnsmasq-dns" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482056 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerName="dnsmasq-dns" Nov 24 18:07:21 crc kubenswrapper[4768]: E1124 18:07:21.482071 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="cinder-scheduler" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482078 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="cinder-scheduler" Nov 24 18:07:21 crc kubenswrapper[4768]: E1124 18:07:21.482089 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerName="init" Nov 24 18:07:21 crc 
kubenswrapper[4768]: I1124 18:07:21.482095 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerName="init" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482245 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="cinder-scheduler" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482244 4768 scope.go:117] "RemoveContainer" containerID="e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482261 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" containerName="probe" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.482566 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9aef2aa-c3bb-4b06-b204-7b557645d5e7" containerName="dnsmasq-dns" Nov 24 18:07:21 crc kubenswrapper[4768]: E1124 18:07:21.483123 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b\": container with ID starting with e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b not found: ID does not exist" containerID="e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.483202 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b"} err="failed to get container status \"e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b\": rpc error: code = NotFound desc = could not find container \"e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b\": container with ID starting with e28230689d393d9868e4c2fd2f80c1b9e045995599874a5cbbfd40d1ba485f3b not found: ID does not exist" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.483266 4768 scope.go:117] "RemoveContainer" containerID="e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092" Nov 24 18:07:21 crc kubenswrapper[4768]: E1124 18:07:21.483692 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092\": container with ID starting with e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092 not found: ID does not exist" containerID="e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.483747 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092"} err="failed to get container status \"e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092\": rpc error: code = NotFound desc = could not find container \"e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092\": container with ID starting with e345b6d2594806ffef02f4ee0d0a8a3e1aaf15afc65fc901f0499e2fdccda092 not found: ID does not exist" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.484098 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.486554 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.494660 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.573935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnhgq\" (UniqueName: \"kubernetes.io/projected/40369462-11a9-45f0-ad9b-cec7971e9414-kube-api-access-xnhgq\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.574000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-scripts\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.574035 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.574214 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40369462-11a9-45f0-ad9b-cec7971e9414-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.574300 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-config-data\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.574525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676538 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnhgq\" (UniqueName: \"kubernetes.io/projected/40369462-11a9-45f0-ad9b-cec7971e9414-kube-api-access-xnhgq\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676670 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-scripts\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676720 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676777 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40369462-11a9-45f0-ad9b-cec7971e9414-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676910 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40369462-11a9-45f0-ad9b-cec7971e9414-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.676855 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-config-data\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.681954 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-config-data\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.692025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.692994 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.693658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40369462-11a9-45f0-ad9b-cec7971e9414-scripts\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.704275 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnhgq\" (UniqueName: \"kubernetes.io/projected/40369462-11a9-45f0-ad9b-cec7971e9414-kube-api-access-xnhgq\") pod \"cinder-scheduler-0\" (UID: \"40369462-11a9-45f0-ad9b-cec7971e9414\") " pod="openstack/cinder-scheduler-0" Nov 
24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.808645 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 18:07:21 crc kubenswrapper[4768]: I1124 18:07:21.918718 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce13a66a-1a9b-490b-8263-376c7e7a86d0" path="/var/lib/kubelet/pods/ce13a66a-1a9b-490b-8263-376c7e7a86d0/volumes" Nov 24 18:07:22 crc kubenswrapper[4768]: I1124 18:07:22.355031 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 18:07:22 crc kubenswrapper[4768]: I1124 18:07:22.404674 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"40369462-11a9-45f0-ad9b-cec7971e9414","Type":"ContainerStarted","Data":"7e612e09ea3ff1bb8b5659128e889ebf4f272c29c1018115a20e9d4fa2a31e89"} Nov 24 18:07:22 crc kubenswrapper[4768]: I1124 18:07:22.950694 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-56748c45b5-4df84" Nov 24 18:07:22 crc kubenswrapper[4768]: I1124 18:07:22.956093 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.012873 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76b54949f4-59kjn" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.346724 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.348401 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.350905 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.351535 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.351661 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dstf8" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.362701 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.409906 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gj4z\" (UniqueName: \"kubernetes.io/projected/e5ca5655-0b68-4c97-984f-2085144d98dc-kube-api-access-7gj4z\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.409956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ca5655-0b68-4c97-984f-2085144d98dc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.410004 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5ca5655-0b68-4c97-984f-2085144d98dc-openstack-config\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " 
pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.410139 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5ca5655-0b68-4c97-984f-2085144d98dc-openstack-config-secret\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.435631 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"40369462-11a9-45f0-ad9b-cec7971e9414","Type":"ContainerStarted","Data":"f1780d3466fc1d65938616c6b40926a06136f605743612afe6c0cb2666a0ab85"} Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.512662 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5ca5655-0b68-4c97-984f-2085144d98dc-openstack-config\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.512876 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5ca5655-0b68-4c97-984f-2085144d98dc-openstack-config-secret\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.512946 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gj4z\" (UniqueName: \"kubernetes.io/projected/e5ca5655-0b68-4c97-984f-2085144d98dc-kube-api-access-7gj4z\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.512978 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ca5655-0b68-4c97-984f-2085144d98dc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.524113 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ca5655-0b68-4c97-984f-2085144d98dc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.528855 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5ca5655-0b68-4c97-984f-2085144d98dc-openstack-config-secret\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.530019 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5ca5655-0b68-4c97-984f-2085144d98dc-openstack-config\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.544260 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gj4z\" (UniqueName: 
\"kubernetes.io/projected/e5ca5655-0b68-4c97-984f-2085144d98dc-kube-api-access-7gj4z\") pod \"openstackclient\" (UID: \"e5ca5655-0b68-4c97-984f-2085144d98dc\") " pod="openstack/openstackclient" Nov 24 18:07:23 crc kubenswrapper[4768]: I1124 18:07:23.697351 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 18:07:24 crc kubenswrapper[4768]: I1124 18:07:24.190278 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 18:07:24 crc kubenswrapper[4768]: I1124 18:07:24.445416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"40369462-11a9-45f0-ad9b-cec7971e9414","Type":"ContainerStarted","Data":"9a2f3672dba02644adf7b14cff36857d2a6e023f3b1f41952dceeb1d5428e1d4"} Nov 24 18:07:24 crc kubenswrapper[4768]: I1124 18:07:24.446771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e5ca5655-0b68-4c97-984f-2085144d98dc","Type":"ContainerStarted","Data":"bf4761047436da19424f7437e2cff572a3c59bc39fdba08bbc1556e75cca0fd8"} Nov 24 18:07:24 crc kubenswrapper[4768]: I1124 18:07:24.471036 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.471010506 podStartE2EDuration="3.471010506s" podCreationTimestamp="2025-11-24 18:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:24.469683231 +0000 UTC m=+1083.330265028" watchObservedRunningTime="2025-11-24 18:07:24.471010506 +0000 UTC m=+1083.331592283" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.116783 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.274505 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-combined-ca-bundle\") pod \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.274679 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-ovndb-tls-certs\") pod \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.274788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qcsd\" (UniqueName: \"kubernetes.io/projected/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-kube-api-access-6qcsd\") pod \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.274841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-httpd-config\") pod \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.274869 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-config\") pod \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\" (UID: \"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2\") " Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.281740 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" (UID: "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.283196 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-kube-api-access-6qcsd" (OuterVolumeSpecName: "kube-api-access-6qcsd") pod "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" (UID: "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2"). InnerVolumeSpecName "kube-api-access-6qcsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.331637 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" (UID: "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.333759 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-config" (OuterVolumeSpecName: "config") pod "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" (UID: "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.361280 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" (UID: "a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.377103 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.377136 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.377146 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.377157 4768 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.377165 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qcsd\" (UniqueName: \"kubernetes.io/projected/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2-kube-api-access-6qcsd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.467941 4768 generic.go:334] "Generic (PLEG): container finished" podID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerID="3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd" exitCode=0 Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.467993 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c9cf6cc78-ssqjz" event={"ID":"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2","Type":"ContainerDied","Data":"3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd"} Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.468012 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c9cf6cc78-ssqjz" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.468030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c9cf6cc78-ssqjz" event={"ID":"a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2","Type":"ContainerDied","Data":"a7aadbb1366cff9530f57ca69991ffde95befaa9ede69671308f86926c1d9d41"} Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.468051 4768 scope.go:117] "RemoveContainer" containerID="9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.498006 4768 scope.go:117] "RemoveContainer" containerID="3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.508498 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c9cf6cc78-ssqjz"] Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.519758 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c9cf6cc78-ssqjz"] Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.529275 4768 scope.go:117] "RemoveContainer" containerID="9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87" Nov 24 18:07:26 crc kubenswrapper[4768]: E1124 18:07:26.529736 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87\": container with ID starting with 9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87 not found: ID does not exist" containerID="9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.529773 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87"} err="failed to get container status \"9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87\": rpc error: code = NotFound desc = could not find container \"9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87\": container with ID starting with 9fb5b205054d164d72767e2531977af50b945f16c18c1f64c816d0e5b07beb87 not found: ID does not exist" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.529811 4768 scope.go:117] "RemoveContainer" containerID="3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd" Nov 24 18:07:26 crc kubenswrapper[4768]: E1124 18:07:26.530085 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd\": container with ID starting with 3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd not found: ID does not exist" containerID="3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd" Nov 24 18:07:26 crc kubenswrapper[4768]: I1124 18:07:26.530118 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd"} err="failed to get container status \"3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd\": rpc error: code = NotFound desc = could not find container \"3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd\": container with ID starting with 3ddfa8e64c5e73ba94186c36d6558b979e594181e30b123f278040f0645f85fd not found: ID does not exist" Nov 24 18:07:26 crc 
kubenswrapper[4768]: I1124 18:07:26.810044 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 18:07:27 crc kubenswrapper[4768]: I1124 18:07:27.907643 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" path="/var/lib/kubelet/pods/a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2/volumes" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.164069 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6w89b"] Nov 24 18:07:29 crc kubenswrapper[4768]: E1124 18:07:29.164980 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-httpd" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.164999 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-httpd" Nov 24 18:07:29 crc kubenswrapper[4768]: E1124 18:07:29.165048 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-api" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.165057 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-api" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.165251 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-api" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.165280 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e2ca70-ec3c-4578-9f19-c05d6bb47fb2" containerName="neutron-httpd" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.166034 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.179795 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6w89b"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.235193 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-operator-scripts\") pod \"nova-api-db-create-6w89b\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.235314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqq8\" (UniqueName: \"kubernetes.io/projected/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-kube-api-access-7fqq8\") pod \"nova-api-db-create-6w89b\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.271901 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6ht94"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.273353 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.284294 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6ht94"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.337342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbff70dc-2806-4b63-abc3-f4e5f69babe1-operator-scripts\") pod \"nova-cell0-db-create-6ht94\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.337519 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-operator-scripts\") pod \"nova-api-db-create-6w89b\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.337632 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskct\" (UniqueName: \"kubernetes.io/projected/bbff70dc-2806-4b63-abc3-f4e5f69babe1-kube-api-access-tskct\") pod \"nova-cell0-db-create-6ht94\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.337665 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqq8\" (UniqueName: \"kubernetes.io/projected/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-kube-api-access-7fqq8\") pod \"nova-api-db-create-6w89b\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.338379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-operator-scripts\") pod \"nova-api-db-create-6w89b\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.365644 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqq8\" (UniqueName: \"kubernetes.io/projected/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-kube-api-access-7fqq8\") pod \"nova-api-db-create-6w89b\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.374653 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-976b-account-create-cmd8x"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.376571 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.379540 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.418057 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-976b-account-create-cmd8x"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.439407 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbff70dc-2806-4b63-abc3-f4e5f69babe1-operator-scripts\") pod \"nova-cell0-db-create-6ht94\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.439504 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bc00587-656d-47e3-bfa1-a722e4a72f2c-operator-scripts\") pod \"nova-api-976b-account-create-cmd8x\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.439536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9t8\" (UniqueName: \"kubernetes.io/projected/5bc00587-656d-47e3-bfa1-a722e4a72f2c-kube-api-access-xm9t8\") pod \"nova-api-976b-account-create-cmd8x\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.439578 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tskct\" (UniqueName: \"kubernetes.io/projected/bbff70dc-2806-4b63-abc3-f4e5f69babe1-kube-api-access-tskct\") pod \"nova-cell0-db-create-6ht94\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.440465 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbff70dc-2806-4b63-abc3-f4e5f69babe1-operator-scripts\") pod \"nova-cell0-db-create-6ht94\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.476576 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-w2rvq"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.477733 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.479719 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tskct\" (UniqueName: \"kubernetes.io/projected/bbff70dc-2806-4b63-abc3-f4e5f69babe1-kube-api-access-tskct\") pod \"nova-cell0-db-create-6ht94\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.483734 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-w2rvq"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.506294 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.545580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe008faf-7594-433a-90ad-8317cfb54dd2-operator-scripts\") pod \"nova-cell1-db-create-w2rvq\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.545963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bc00587-656d-47e3-bfa1-a722e4a72f2c-operator-scripts\") pod \"nova-api-976b-account-create-cmd8x\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.546116 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm9t8\" (UniqueName: \"kubernetes.io/projected/5bc00587-656d-47e3-bfa1-a722e4a72f2c-kube-api-access-xm9t8\") pod \"nova-api-976b-account-create-cmd8x\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.546385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjbpw\" (UniqueName: \"kubernetes.io/projected/fe008faf-7594-433a-90ad-8317cfb54dd2-kube-api-access-zjbpw\") pod \"nova-cell1-db-create-w2rvq\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.547560 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bc00587-656d-47e3-bfa1-a722e4a72f2c-operator-scripts\") pod \"nova-api-976b-account-create-cmd8x\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.585329 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9t8\" (UniqueName: \"kubernetes.io/projected/5bc00587-656d-47e3-bfa1-a722e4a72f2c-kube-api-access-xm9t8\") pod \"nova-api-976b-account-create-cmd8x\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.598403 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.609248 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a04b-account-create-5kp5p"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.613101 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.618561 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.650917 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe008faf-7594-433a-90ad-8317cfb54dd2-operator-scripts\") pod \"nova-cell1-db-create-w2rvq\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.652082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjbpw\" (UniqueName: \"kubernetes.io/projected/fe008faf-7594-433a-90ad-8317cfb54dd2-kube-api-access-zjbpw\") pod \"nova-cell1-db-create-w2rvq\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.652162 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe008faf-7594-433a-90ad-8317cfb54dd2-operator-scripts\") pod \"nova-cell1-db-create-w2rvq\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.660677 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a04b-account-create-5kp5p"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.674435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjbpw\" (UniqueName: \"kubernetes.io/projected/fe008faf-7594-433a-90ad-8317cfb54dd2-kube-api-access-zjbpw\") pod \"nova-cell1-db-create-w2rvq\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.731406 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.759086 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmh9\" (UniqueName: \"kubernetes.io/projected/060e3ec5-bc92-41ba-be28-81705247ed9f-kube-api-access-vlmh9\") pod \"nova-cell0-a04b-account-create-5kp5p\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.759168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e3ec5-bc92-41ba-be28-81705247ed9f-operator-scripts\") pod \"nova-cell0-a04b-account-create-5kp5p\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.776173 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8633-account-create-xgxhq"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.777359 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.779791 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.797861 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8633-account-create-xgxhq"] Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.828705 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.861880 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9017e12-9ea0-4e50-9723-980a39a62146-operator-scripts\") pod \"nova-cell1-8633-account-create-xgxhq\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.861939 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmttf\" (UniqueName: \"kubernetes.io/projected/a9017e12-9ea0-4e50-9723-980a39a62146-kube-api-access-nmttf\") pod \"nova-cell1-8633-account-create-xgxhq\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.862033 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlmh9\" (UniqueName: \"kubernetes.io/projected/060e3ec5-bc92-41ba-be28-81705247ed9f-kube-api-access-vlmh9\") pod \"nova-cell0-a04b-account-create-5kp5p\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.862071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e3ec5-bc92-41ba-be28-81705247ed9f-operator-scripts\") pod \"nova-cell0-a04b-account-create-5kp5p\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.862977 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e3ec5-bc92-41ba-be28-81705247ed9f-operator-scripts\") pod \"nova-cell0-a04b-account-create-5kp5p\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.881154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlmh9\" (UniqueName: \"kubernetes.io/projected/060e3ec5-bc92-41ba-be28-81705247ed9f-kube-api-access-vlmh9\") pod \"nova-cell0-a04b-account-create-5kp5p\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.964384 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9017e12-9ea0-4e50-9723-980a39a62146-operator-scripts\") pod \"nova-cell1-8633-account-create-xgxhq\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:29 crc kubenswrapper[4768]: 
I1124 18:07:29.964447 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmttf\" (UniqueName: \"kubernetes.io/projected/a9017e12-9ea0-4e50-9723-980a39a62146-kube-api-access-nmttf\") pod \"nova-cell1-8633-account-create-xgxhq\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.965179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9017e12-9ea0-4e50-9723-980a39a62146-operator-scripts\") pod \"nova-cell1-8633-account-create-xgxhq\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.972382 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:29 crc kubenswrapper[4768]: I1124 18:07:29.984024 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmttf\" (UniqueName: \"kubernetes.io/projected/a9017e12-9ea0-4e50-9723-980a39a62146-kube-api-access-nmttf\") pod \"nova-cell1-8633-account-create-xgxhq\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:30 crc kubenswrapper[4768]: I1124 18:07:30.102553 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:32 crc kubenswrapper[4768]: I1124 18:07:32.024153 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.040603 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-976b-account-create-cmd8x"] Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.147182 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6ht94"] Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.155935 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a04b-account-create-5kp5p"] Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.167776 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-w2rvq"] Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.312091 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6w89b"] Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.320772 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8633-account-create-xgxhq"] Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.593515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8633-account-create-xgxhq" event={"ID":"a9017e12-9ea0-4e50-9723-980a39a62146","Type":"ContainerStarted","Data":"690855d94e48790ddc08be66c5a249234c9e67d492a2ee6a939071520e919d37"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.596307 4768 generic.go:334] "Generic (PLEG): container finished" podID="5bc00587-656d-47e3-bfa1-a722e4a72f2c" containerID="4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad" exitCode=0 Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.596380 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-976b-account-create-cmd8x" 
event={"ID":"5bc00587-656d-47e3-bfa1-a722e4a72f2c","Type":"ContainerDied","Data":"4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.596412 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-976b-account-create-cmd8x" event={"ID":"5bc00587-656d-47e3-bfa1-a722e4a72f2c","Type":"ContainerStarted","Data":"e187f1a7591f441e9a547252caf3fdbcfd17b6daba4f52c4258e9f601bb14839"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.598012 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ht94" event={"ID":"bbff70dc-2806-4b63-abc3-f4e5f69babe1","Type":"ContainerStarted","Data":"adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.598056 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ht94" event={"ID":"bbff70dc-2806-4b63-abc3-f4e5f69babe1","Type":"ContainerStarted","Data":"ca44c35190354a476ba562ca9ceb5fff35b7c9e6c5873ca5e914ec1b963977f2"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.600104 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a04b-account-create-5kp5p" event={"ID":"060e3ec5-bc92-41ba-be28-81705247ed9f","Type":"ContainerStarted","Data":"b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.600139 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a04b-account-create-5kp5p" event={"ID":"060e3ec5-bc92-41ba-be28-81705247ed9f","Type":"ContainerStarted","Data":"8da091a541640983d440aa0842eec0f3d3d3e024d41c71bae480da04b9b3913a"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.602419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2rvq" event={"ID":"fe008faf-7594-433a-90ad-8317cfb54dd2","Type":"ContainerStarted","Data":"da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.602448 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2rvq" event={"ID":"fe008faf-7594-433a-90ad-8317cfb54dd2","Type":"ContainerStarted","Data":"22498c1fda416bd3524afea326cdae7ad27cd4444d55457c9ee69fdf6e7a00db"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.606910 4768 generic.go:334] "Generic (PLEG): container finished" podID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerID="1fbb8e53a79f7088c31c3555ebbfaba7165322e4e5bee316942049807b77bd08" exitCode=137 Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.607142 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerDied","Data":"1fbb8e53a79f7088c31c3555ebbfaba7165322e4e5bee316942049807b77bd08"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.608908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6w89b" event={"ID":"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6","Type":"ContainerStarted","Data":"3ec78ff71758fc24ecbe97a92ef2ac907503f72cd3b372b5b764c9060348a821"} Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.633566 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6ht94" podStartSLOduration=6.633472439 podStartE2EDuration="6.633472439s" podCreationTimestamp="2025-11-24 18:07:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:35.627221062 +0000 UTC m=+1094.487802839" watchObservedRunningTime="2025-11-24 18:07:35.633472439 +0000 UTC m=+1094.494054226" Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.649788 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-a04b-account-create-5kp5p" podStartSLOduration=6.649765422 podStartE2EDuration="6.649765422s" podCreationTimestamp="2025-11-24 18:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:35.644580244 +0000 UTC m=+1094.505162021" watchObservedRunningTime="2025-11-24 18:07:35.649765422 +0000 UTC m=+1094.510347199" Nov 24 18:07:35 crc kubenswrapper[4768]: I1124 18:07:35.661709 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-w2rvq" podStartSLOduration=6.661686399 podStartE2EDuration="6.661686399s" podCreationTimestamp="2025-11-24 18:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:35.660529727 +0000 UTC m=+1094.521111524" watchObservedRunningTime="2025-11-24 18:07:35.661686399 +0000 UTC m=+1094.522268176" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.049842 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.182689 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-run-httpd\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.182787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-log-httpd\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.182878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mdvg\" (UniqueName: \"kubernetes.io/projected/d0b8cf78-9bbe-44cd-8907-78fd9548d712-kube-api-access-9mdvg\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.182909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-combined-ca-bundle\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.183027 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-scripts\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.183092 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-config-data\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.183116 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-sg-core-conf-yaml\") pod \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\" (UID: \"d0b8cf78-9bbe-44cd-8907-78fd9548d712\") " Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.183248 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.183497 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.184093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.197102 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b8cf78-9bbe-44cd-8907-78fd9548d712-kube-api-access-9mdvg" (OuterVolumeSpecName: "kube-api-access-9mdvg") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "kube-api-access-9mdvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.201645 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-scripts" (OuterVolumeSpecName: "scripts") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.218582 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.285722 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.285762 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0b8cf78-9bbe-44cd-8907-78fd9548d712-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.285773 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mdvg\" (UniqueName: \"kubernetes.io/projected/d0b8cf78-9bbe-44cd-8907-78fd9548d712-kube-api-access-9mdvg\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.285784 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.297647 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.315855 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-config-data" (OuterVolumeSpecName: "config-data") pod "d0b8cf78-9bbe-44cd-8907-78fd9548d712" (UID: "d0b8cf78-9bbe-44cd-8907-78fd9548d712"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.387387 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.387421 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0b8cf78-9bbe-44cd-8907-78fd9548d712-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.619277 4768 generic.go:334] "Generic (PLEG): container finished" podID="060e3ec5-bc92-41ba-be28-81705247ed9f" containerID="b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac" exitCode=0 Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.619356 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a04b-account-create-5kp5p" event={"ID":"060e3ec5-bc92-41ba-be28-81705247ed9f","Type":"ContainerDied","Data":"b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.621470 4768 generic.go:334] "Generic (PLEG): container finished" podID="fe008faf-7594-433a-90ad-8317cfb54dd2" containerID="da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45" exitCode=0 Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.621570 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2rvq" event={"ID":"fe008faf-7594-433a-90ad-8317cfb54dd2","Type":"ContainerDied","Data":"da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.631562 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0b8cf78-9bbe-44cd-8907-78fd9548d712","Type":"ContainerDied","Data":"d9f1c6a89d928104d6d03b36d146df7bef337e950d08421889dc753dbeef4178"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.631614 4768 scope.go:117] "RemoveContainer" containerID="1fbb8e53a79f7088c31c3555ebbfaba7165322e4e5bee316942049807b77bd08" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.631817 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.639930 4768 generic.go:334] "Generic (PLEG): container finished" podID="8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" containerID="29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b" exitCode=0 Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.639988 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6w89b" event={"ID":"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6","Type":"ContainerDied","Data":"29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.645254 4768 generic.go:334] "Generic (PLEG): container finished" podID="a9017e12-9ea0-4e50-9723-980a39a62146" containerID="f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367" exitCode=0 Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.645313 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8633-account-create-xgxhq" event={"ID":"a9017e12-9ea0-4e50-9723-980a39a62146","Type":"ContainerDied","Data":"f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.648114 4768 generic.go:334] "Generic (PLEG): container finished" podID="bbff70dc-2806-4b63-abc3-f4e5f69babe1" containerID="adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2" exitCode=0 Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.648160 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ht94" event={"ID":"bbff70dc-2806-4b63-abc3-f4e5f69babe1","Type":"ContainerDied","Data":"adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.661050 4768 scope.go:117] "RemoveContainer" containerID="9b37a8519bd267bcba127733196da828e65626303691e6ce5e84b3d746b30ea9" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.672690 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e5ca5655-0b68-4c97-984f-2085144d98dc","Type":"ContainerStarted","Data":"6edf5fdd0c7576dcd56bf44755dde54dc5f23875036c68683ba85073c21f55a6"} Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.714778 4768 scope.go:117] "RemoveContainer" containerID="756c8026532e94696fa6b9fa0598cd4361a90365b3db883f9864b4341d9ed87d" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.748550 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.50333071 podStartE2EDuration="13.748532182s" podCreationTimestamp="2025-11-24 18:07:23 +0000 UTC" firstStartedPulling="2025-11-24 18:07:24.226457387 +0000 UTC m=+1083.087039164" lastFinishedPulling="2025-11-24 18:07:35.471658859 +0000 UTC m=+1094.332240636" observedRunningTime="2025-11-24 18:07:36.738870164 +0000 UTC m=+1095.599451941" watchObservedRunningTime="2025-11-24 18:07:36.748532182 +0000 UTC m=+1095.609113959" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.793450 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.810999 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.819941 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:36 crc kubenswrapper[4768]: E1124 18:07:36.820311 4768 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="sg-core" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.820324 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="sg-core" Nov 24 18:07:36 crc kubenswrapper[4768]: E1124 18:07:36.820335 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="proxy-httpd" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.820341 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="proxy-httpd" Nov 24 18:07:36 crc kubenswrapper[4768]: E1124 18:07:36.820365 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="ceilometer-notification-agent" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.820372 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="ceilometer-notification-agent" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.820563 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="proxy-httpd" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.820578 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="ceilometer-notification-agent" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.820591 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" containerName="sg-core" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.822190 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.824884 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.828609 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.829033 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896122 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896248 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qwz\" (UniqueName: \"kubernetes.io/projected/fd6e81df-703b-41cc-853c-3c1257786d5c-kube-api-access-q5qwz\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-log-httpd\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896328 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-scripts\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896366 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-config-data\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896449 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-run-httpd\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.896505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.929058 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:36 crc kubenswrapper[4768]: E1124 18:07:36.929765 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-q5qwz log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], 
failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="fd6e81df-703b-41cc-853c-3c1257786d5c" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-run-httpd\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998784 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998887 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qwz\" (UniqueName: \"kubernetes.io/projected/fd6e81df-703b-41cc-853c-3c1257786d5c-kube-api-access-q5qwz\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998919 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-log-httpd\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-scripts\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.998960 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-config-data\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:36 crc kubenswrapper[4768]: I1124 18:07:36.999867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-log-httpd\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.004995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-run-httpd\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.010088 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-scripts\") pod \"ceilometer-0\" (UID: 
\"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.011284 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.017648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-config-data\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.020282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.028749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qwz\" (UniqueName: \"kubernetes.io/projected/fd6e81df-703b-41cc-853c-3c1257786d5c-kube-api-access-q5qwz\") pod \"ceilometer-0\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.094938 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.205396 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bc00587-656d-47e3-bfa1-a722e4a72f2c-operator-scripts\") pod \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.205438 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm9t8\" (UniqueName: \"kubernetes.io/projected/5bc00587-656d-47e3-bfa1-a722e4a72f2c-kube-api-access-xm9t8\") pod \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\" (UID: \"5bc00587-656d-47e3-bfa1-a722e4a72f2c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.205893 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc00587-656d-47e3-bfa1-a722e4a72f2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bc00587-656d-47e3-bfa1-a722e4a72f2c" (UID: "5bc00587-656d-47e3-bfa1-a722e4a72f2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.205975 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bc00587-656d-47e3-bfa1-a722e4a72f2c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.208660 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc00587-656d-47e3-bfa1-a722e4a72f2c-kube-api-access-xm9t8" (OuterVolumeSpecName: "kube-api-access-xm9t8") pod "5bc00587-656d-47e3-bfa1-a722e4a72f2c" (UID: "5bc00587-656d-47e3-bfa1-a722e4a72f2c"). InnerVolumeSpecName "kube-api-access-xm9t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.308103 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm9t8\" (UniqueName: \"kubernetes.io/projected/5bc00587-656d-47e3-bfa1-a722e4a72f2c-kube-api-access-xm9t8\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.681458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-976b-account-create-cmd8x" event={"ID":"5bc00587-656d-47e3-bfa1-a722e4a72f2c","Type":"ContainerDied","Data":"e187f1a7591f441e9a547252caf3fdbcfd17b6daba4f52c4258e9f601bb14839"} Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.681540 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-976b-account-create-cmd8x" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.681555 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e187f1a7591f441e9a547252caf3fdbcfd17b6daba4f52c4258e9f601bb14839" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.683467 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.697619 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-config-data\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820350 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-combined-ca-bundle\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820391 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-log-httpd\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820513 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5qwz\" (UniqueName: \"kubernetes.io/projected/fd6e81df-703b-41cc-853c-3c1257786d5c-kube-api-access-q5qwz\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820625 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-run-httpd\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820672 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-sg-core-conf-yaml\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc 
kubenswrapper[4768]: I1124 18:07:37.820748 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-scripts\") pod \"fd6e81df-703b-41cc-853c-3c1257786d5c\" (UID: \"fd6e81df-703b-41cc-853c-3c1257786d5c\") " Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.820770 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.821010 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.821436 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.821459 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd6e81df-703b-41cc-853c-3c1257786d5c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.827747 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.827912 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.828026 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-scripts" (OuterVolumeSpecName: "scripts") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.831881 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-config-data" (OuterVolumeSpecName: "config-data") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.842707 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6e81df-703b-41cc-853c-3c1257786d5c-kube-api-access-q5qwz" (OuterVolumeSpecName: "kube-api-access-q5qwz") pod "fd6e81df-703b-41cc-853c-3c1257786d5c" (UID: "fd6e81df-703b-41cc-853c-3c1257786d5c"). InnerVolumeSpecName "kube-api-access-q5qwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.918934 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b8cf78-9bbe-44cd-8907-78fd9548d712" path="/var/lib/kubelet/pods/d0b8cf78-9bbe-44cd-8907-78fd9548d712/volumes" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.922470 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.922525 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.922539 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.922552 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5qwz\" (UniqueName: \"kubernetes.io/projected/fd6e81df-703b-41cc-853c-3c1257786d5c-kube-api-access-q5qwz\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:37 crc kubenswrapper[4768]: I1124 18:07:37.922563 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd6e81df-703b-41cc-853c-3c1257786d5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.109157 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.226924 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe008faf-7594-433a-90ad-8317cfb54dd2-operator-scripts\") pod \"fe008faf-7594-433a-90ad-8317cfb54dd2\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.227012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjbpw\" (UniqueName: \"kubernetes.io/projected/fe008faf-7594-433a-90ad-8317cfb54dd2-kube-api-access-zjbpw\") pod \"fe008faf-7594-433a-90ad-8317cfb54dd2\" (UID: \"fe008faf-7594-433a-90ad-8317cfb54dd2\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.229712 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe008faf-7594-433a-90ad-8317cfb54dd2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe008faf-7594-433a-90ad-8317cfb54dd2" (UID: "fe008faf-7594-433a-90ad-8317cfb54dd2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.235304 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe008faf-7594-433a-90ad-8317cfb54dd2-kube-api-access-zjbpw" (OuterVolumeSpecName: "kube-api-access-zjbpw") pod "fe008faf-7594-433a-90ad-8317cfb54dd2" (UID: "fe008faf-7594-433a-90ad-8317cfb54dd2"). InnerVolumeSpecName "kube-api-access-zjbpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.242328 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.264022 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.272622 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.279653 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329054 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-operator-scripts\") pod \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329118 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tskct\" (UniqueName: \"kubernetes.io/projected/bbff70dc-2806-4b63-abc3-f4e5f69babe1-kube-api-access-tskct\") pod \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329187 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbff70dc-2806-4b63-abc3-f4e5f69babe1-operator-scripts\") pod \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\" (UID: \"bbff70dc-2806-4b63-abc3-f4e5f69babe1\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9017e12-9ea0-4e50-9723-980a39a62146-operator-scripts\") pod \"a9017e12-9ea0-4e50-9723-980a39a62146\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329250 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmttf\" (UniqueName: \"kubernetes.io/projected/a9017e12-9ea0-4e50-9723-980a39a62146-kube-api-access-nmttf\") pod \"a9017e12-9ea0-4e50-9723-980a39a62146\" (UID: \"a9017e12-9ea0-4e50-9723-980a39a62146\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329325 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fqq8\" (UniqueName: \"kubernetes.io/projected/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-kube-api-access-7fqq8\") pod \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\" (UID: \"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329655 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" (UID: "8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.329963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbff70dc-2806-4b63-abc3-f4e5f69babe1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbff70dc-2806-4b63-abc3-f4e5f69babe1" (UID: "bbff70dc-2806-4b63-abc3-f4e5f69babe1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.330027 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe008faf-7594-433a-90ad-8317cfb54dd2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.330049 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.330063 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjbpw\" (UniqueName: \"kubernetes.io/projected/fe008faf-7594-433a-90ad-8317cfb54dd2-kube-api-access-zjbpw\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.330094 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9017e12-9ea0-4e50-9723-980a39a62146-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9017e12-9ea0-4e50-9723-980a39a62146" (UID: "a9017e12-9ea0-4e50-9723-980a39a62146"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.333130 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbff70dc-2806-4b63-abc3-f4e5f69babe1-kube-api-access-tskct" (OuterVolumeSpecName: "kube-api-access-tskct") pod "bbff70dc-2806-4b63-abc3-f4e5f69babe1" (UID: "bbff70dc-2806-4b63-abc3-f4e5f69babe1"). InnerVolumeSpecName "kube-api-access-tskct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.333764 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-kube-api-access-7fqq8" (OuterVolumeSpecName: "kube-api-access-7fqq8") pod "8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" (UID: "8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6"). InnerVolumeSpecName "kube-api-access-7fqq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.333815 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9017e12-9ea0-4e50-9723-980a39a62146-kube-api-access-nmttf" (OuterVolumeSpecName: "kube-api-access-nmttf") pod "a9017e12-9ea0-4e50-9723-980a39a62146" (UID: "a9017e12-9ea0-4e50-9723-980a39a62146"). InnerVolumeSpecName "kube-api-access-nmttf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.431830 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e3ec5-bc92-41ba-be28-81705247ed9f-operator-scripts\") pod \"060e3ec5-bc92-41ba-be28-81705247ed9f\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432036 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlmh9\" (UniqueName: \"kubernetes.io/projected/060e3ec5-bc92-41ba-be28-81705247ed9f-kube-api-access-vlmh9\") pod \"060e3ec5-bc92-41ba-be28-81705247ed9f\" (UID: \"060e3ec5-bc92-41ba-be28-81705247ed9f\") " Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432313 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060e3ec5-bc92-41ba-be28-81705247ed9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "060e3ec5-bc92-41ba-be28-81705247ed9f" (UID: "060e3ec5-bc92-41ba-be28-81705247ed9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432565 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/060e3ec5-bc92-41ba-be28-81705247ed9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432589 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fqq8\" (UniqueName: \"kubernetes.io/projected/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6-kube-api-access-7fqq8\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432602 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tskct\" (UniqueName: \"kubernetes.io/projected/bbff70dc-2806-4b63-abc3-f4e5f69babe1-kube-api-access-tskct\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432614 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbff70dc-2806-4b63-abc3-f4e5f69babe1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432626 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9017e12-9ea0-4e50-9723-980a39a62146-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.432638 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmttf\" (UniqueName: \"kubernetes.io/projected/a9017e12-9ea0-4e50-9723-980a39a62146-kube-api-access-nmttf\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.435900 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060e3ec5-bc92-41ba-be28-81705247ed9f-kube-api-access-vlmh9" (OuterVolumeSpecName: "kube-api-access-vlmh9") pod "060e3ec5-bc92-41ba-be28-81705247ed9f" (UID: "060e3ec5-bc92-41ba-be28-81705247ed9f"). InnerVolumeSpecName "kube-api-access-vlmh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.534500 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlmh9\" (UniqueName: \"kubernetes.io/projected/060e3ec5-bc92-41ba-be28-81705247ed9f-kube-api-access-vlmh9\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.692604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2rvq" event={"ID":"fe008faf-7594-433a-90ad-8317cfb54dd2","Type":"ContainerDied","Data":"22498c1fda416bd3524afea326cdae7ad27cd4444d55457c9ee69fdf6e7a00db"} Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.693013 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22498c1fda416bd3524afea326cdae7ad27cd4444d55457c9ee69fdf6e7a00db" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.692654 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2rvq" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.694219 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6w89b" event={"ID":"8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6","Type":"ContainerDied","Data":"3ec78ff71758fc24ecbe97a92ef2ac907503f72cd3b372b5b764c9060348a821"} Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.694255 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec78ff71758fc24ecbe97a92ef2ac907503f72cd3b372b5b764c9060348a821" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.694315 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6w89b" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.696267 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8633-account-create-xgxhq" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.696590 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8633-account-create-xgxhq" event={"ID":"a9017e12-9ea0-4e50-9723-980a39a62146","Type":"ContainerDied","Data":"690855d94e48790ddc08be66c5a249234c9e67d492a2ee6a939071520e919d37"} Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.696639 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690855d94e48790ddc08be66c5a249234c9e67d492a2ee6a939071520e919d37" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.697965 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ht94" event={"ID":"bbff70dc-2806-4b63-abc3-f4e5f69babe1","Type":"ContainerDied","Data":"ca44c35190354a476ba562ca9ceb5fff35b7c9e6c5873ca5e914ec1b963977f2"} Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.698004 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca44c35190354a476ba562ca9ceb5fff35b7c9e6c5873ca5e914ec1b963977f2" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.697981 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6ht94" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.699346 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.699403 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a04b-account-create-5kp5p" event={"ID":"060e3ec5-bc92-41ba-be28-81705247ed9f","Type":"ContainerDied","Data":"8da091a541640983d440aa0842eec0f3d3d3e024d41c71bae480da04b9b3913a"} Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.699441 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da091a541640983d440aa0842eec0f3d3d3e024d41c71bae480da04b9b3913a" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.699672 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a04b-account-create-5kp5p" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.748212 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.787450 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.795091 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:38 crc kubenswrapper[4768]: E1124 18:07:38.795956 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060e3ec5-bc92-41ba-be28-81705247ed9f" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.795983 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="060e3ec5-bc92-41ba-be28-81705247ed9f" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: E1124 18:07:38.796036 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9017e12-9ea0-4e50-9723-980a39a62146" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796047 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9017e12-9ea0-4e50-9723-980a39a62146" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: E1124 18:07:38.796065 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe008faf-7594-433a-90ad-8317cfb54dd2" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796073 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe008faf-7594-433a-90ad-8317cfb54dd2" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: E1124 18:07:38.796100 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796111 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: E1124 18:07:38.796135 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc00587-656d-47e3-bfa1-a722e4a72f2c" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796143 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc00587-656d-47e3-bfa1-a722e4a72f2c" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: E1124 18:07:38.796157 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbff70dc-2806-4b63-abc3-f4e5f69babe1" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 
18:07:38.796166 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbff70dc-2806-4b63-abc3-f4e5f69babe1" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796894 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbff70dc-2806-4b63-abc3-f4e5f69babe1" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796937 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc00587-656d-47e3-bfa1-a722e4a72f2c" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796963 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.796979 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="060e3ec5-bc92-41ba-be28-81705247ed9f" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.797003 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9017e12-9ea0-4e50-9723-980a39a62146" containerName="mariadb-account-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.797013 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe008faf-7594-433a-90ad-8317cfb54dd2" containerName="mariadb-database-create" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.801084 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.803538 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.804578 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.806680 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.943531 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-scripts\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.943665 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.943733 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.944042 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-config-data\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 
18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.944110 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-log-httpd\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.944161 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-run-httpd\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:38 crc kubenswrapper[4768]: I1124 18:07:38.944231 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzh6x\" (UniqueName: \"kubernetes.io/projected/a4aba723-2648-44d9-863d-56f7a0803996-kube-api-access-bzh6x\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046098 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-config-data\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-log-httpd\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-run-httpd\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046190 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzh6x\" (UniqueName: \"kubernetes.io/projected/a4aba723-2648-44d9-863d-56f7a0803996-kube-api-access-bzh6x\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-scripts\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046249 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.046275 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.047292 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-run-httpd\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.047322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-log-httpd\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.051419 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.055700 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-scripts\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.056078 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-config-data\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.056567 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.065873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzh6x\" (UniqueName: \"kubernetes.io/projected/a4aba723-2648-44d9-863d-56f7a0803996-kube-api-access-bzh6x\") pod \"ceilometer-0\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.121752 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.558892 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.708026 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerStarted","Data":"e6625e7bfc19b967949da7bf632c0dd58c004f241d419fa1ef471870f580e2cd"} Nov 24 18:07:39 crc kubenswrapper[4768]: I1124 18:07:39.909475 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6e81df-703b-41cc-853c-3c1257786d5c" path="/var/lib/kubelet/pods/fd6e81df-703b-41cc-853c-3c1257786d5c/volumes" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.028331 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.283231 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc00587_656d_47e3_bfa1_a722e4a72f2c.slice/crio-conmon-4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc00587_656d_47e3_bfa1_a722e4a72f2c.slice/crio-conmon-4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.283325 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc00587_656d_47e3_bfa1_a722e4a72f2c.slice/crio-4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc00587_656d_47e3_bfa1_a722e4a72f2c.slice/crio-4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.283347 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbff70dc_2806_4b63_abc3_f4e5f69babe1.slice/crio-ca44c35190354a476ba562ca9ceb5fff35b7c9e6c5873ca5e914ec1b963977f2": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbff70dc_2806_4b63_abc3_f4e5f69babe1.slice/crio-ca44c35190354a476ba562ca9ceb5fff35b7c9e6c5873ca5e914ec1b963977f2: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.283363 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e3ec5_bc92_41ba_be28_81705247ed9f.slice/crio-8da091a541640983d440aa0842eec0f3d3d3e024d41c71bae480da04b9b3913a": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e3ec5_bc92_41ba_be28_81705247ed9f.slice/crio-8da091a541640983d440aa0842eec0f3d3d3e024d41c71bae480da04b9b3913a: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.283381 4768 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe008faf_7594_433a_90ad_8317cfb54dd2.slice/crio-22498c1fda416bd3524afea326cdae7ad27cd4444d55457c9ee69fdf6e7a00db": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe008faf_7594_433a_90ad_8317cfb54dd2.slice/crio-22498c1fda416bd3524afea326cdae7ad27cd4444d55457c9ee69fdf6e7a00db: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285845 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbff70dc_2806_4b63_abc3_f4e5f69babe1.slice/crio-conmon-adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbff70dc_2806_4b63_abc3_f4e5f69babe1.slice/crio-conmon-adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285875 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe008faf_7594_433a_90ad_8317cfb54dd2.slice/crio-conmon-da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe008faf_7594_433a_90ad_8317cfb54dd2.slice/crio-conmon-da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285891 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbff70dc_2806_4b63_abc3_f4e5f69babe1.slice/crio-adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbff70dc_2806_4b63_abc3_f4e5f69babe1.slice/crio-adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285906 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe008faf_7594_433a_90ad_8317cfb54dd2.slice/crio-da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe008faf_7594_433a_90ad_8317cfb54dd2.slice/crio-da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285924 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9017e12_9ea0_4e50_9723_980a39a62146.slice/crio-690855d94e48790ddc08be66c5a249234c9e67d492a2ee6a939071520e919d37": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9017e12_9ea0_4e50_9723_980a39a62146.slice/crio-690855d94e48790ddc08be66c5a249234c9e67d492a2ee6a939071520e919d37: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285940 4768 watcher.go:93] Error while processing 
event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8410c6fc_2a1a_4c46_bd1b_ce4b923abaa6.slice/crio-3ec78ff71758fc24ecbe97a92ef2ac907503f72cd3b372b5b764c9060348a821": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8410c6fc_2a1a_4c46_bd1b_ce4b923abaa6.slice/crio-3ec78ff71758fc24ecbe97a92ef2ac907503f72cd3b372b5b764c9060348a821: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285954 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e3ec5_bc92_41ba_be28_81705247ed9f.slice/crio-conmon-b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e3ec5_bc92_41ba_be28_81705247ed9f.slice/crio-conmon-b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285968 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e3ec5_bc92_41ba_be28_81705247ed9f.slice/crio-b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e3ec5_bc92_41ba_be28_81705247ed9f.slice/crio-b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.285986 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8410c6fc_2a1a_4c46_bd1b_ce4b923abaa6.slice/crio-conmon-29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8410c6fc_2a1a_4c46_bd1b_ce4b923abaa6.slice/crio-conmon-29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.288569 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9017e12_9ea0_4e50_9723_980a39a62146.slice/crio-conmon-f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9017e12_9ea0_4e50_9723_980a39a62146.slice/crio-conmon-f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.288608 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8410c6fc_2a1a_4c46_bd1b_ce4b923abaa6.slice/crio-29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8410c6fc_2a1a_4c46_bd1b_ce4b923abaa6.slice/crio-29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.288715 4768 
watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9017e12_9ea0_4e50_9723_980a39a62146.slice/crio-f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9017e12_9ea0_4e50_9723_980a39a62146.slice/crio-f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367.scope: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: W1124 18:07:40.303326 4768 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6e81df_703b_41cc_853c_3c1257786d5c.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6e81df_703b_41cc_853c_3c1257786d5c.slice: no such file or directory Nov 24 18:07:40 crc kubenswrapper[4768]: E1124 18:07:40.551913 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e955ebb_07b3_4997_b373_7e39827a2d90.slice/crio-conmon-e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.713134 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.724128 4768 generic.go:334] "Generic (PLEG): container finished" podID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerID="e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633" exitCode=137 Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.724171 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e955ebb-07b3-4997-b373-7e39827a2d90","Type":"ContainerDied","Data":"e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633"} Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.724204 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e955ebb-07b3-4997-b373-7e39827a2d90","Type":"ContainerDied","Data":"2760831b1e1b556db40c42c360eddc0d51ea77f4a9bdce922243fc396cdf4122"} Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.724229 4768 scope.go:117] "RemoveContainer" containerID="e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.724391 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.755802 4768 scope.go:117] "RemoveContainer" containerID="6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773341 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwz5b\" (UniqueName: \"kubernetes.io/projected/2e955ebb-07b3-4997-b373-7e39827a2d90-kube-api-access-kwz5b\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e955ebb-07b3-4997-b373-7e39827a2d90-logs\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data-custom\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773582 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-combined-ca-bundle\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e955ebb-07b3-4997-b373-7e39827a2d90-etc-machine-id\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.773703 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-scripts\") pod \"2e955ebb-07b3-4997-b373-7e39827a2d90\" (UID: \"2e955ebb-07b3-4997-b373-7e39827a2d90\") " Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.774432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e955ebb-07b3-4997-b373-7e39827a2d90-logs" (OuterVolumeSpecName: "logs") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.775181 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e955ebb-07b3-4997-b373-7e39827a2d90-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.783926 4768 scope.go:117] "RemoveContainer" containerID="e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.783965 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e955ebb-07b3-4997-b373-7e39827a2d90-kube-api-access-kwz5b" (OuterVolumeSpecName: "kube-api-access-kwz5b") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "kube-api-access-kwz5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.784050 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-scripts" (OuterVolumeSpecName: "scripts") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: E1124 18:07:40.786872 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633\": container with ID starting with e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633 not found: ID does not exist" containerID="e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.786904 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633"} err="failed to get container status \"e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633\": rpc error: code = NotFound desc = could not find container \"e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633\": container with ID starting with e9be410a7f7e645060ba22e972ce0b41b0b8c606b6f2a91994fa6a38e9148633 not found: ID does not exist" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.786930 4768 scope.go:117] "RemoveContainer" containerID="6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.786981 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: E1124 18:07:40.788303 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49\": container with ID starting with 6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49 not found: ID does not exist" containerID="6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.788339 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49"} err="failed to get container status \"6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49\": rpc error: code = NotFound desc = could not find container \"6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49\": container with ID starting with 6afc50cf8abe60f92de11ea83a4690bdad3d1b2130ea07e37b611d9f9fcd4b49 not found: ID does not exist" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.826567 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.871373 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data" (OuterVolumeSpecName: "config-data") pod "2e955ebb-07b3-4997-b373-7e39827a2d90" (UID: "2e955ebb-07b3-4997-b373-7e39827a2d90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875697 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e955ebb-07b3-4997-b373-7e39827a2d90-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875725 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875735 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875745 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwz5b\" (UniqueName: \"kubernetes.io/projected/2e955ebb-07b3-4997-b373-7e39827a2d90-kube-api-access-kwz5b\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875755 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e955ebb-07b3-4997-b373-7e39827a2d90-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875762 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:40 crc kubenswrapper[4768]: I1124 18:07:40.875770 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e955ebb-07b3-4997-b373-7e39827a2d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.055829 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.065228 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.073260 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:41 crc kubenswrapper[4768]: E1124 18:07:41.073632 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api-log" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.073650 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api-log" Nov 24 18:07:41 crc kubenswrapper[4768]: E1124 18:07:41.073673 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.073679 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.073827 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.073846 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" containerName="cinder-api-log" Nov 24 18:07:41 crc 
kubenswrapper[4768]: I1124 18:07:41.074678 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.082572 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.082917 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.083402 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.092712 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.184714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.184771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.184817 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-logs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.185253 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.185525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhq6\" (UniqueName: \"kubernetes.io/projected/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-kube-api-access-skhq6\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.185676 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.185714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-config-data\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.185794 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-scripts\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.185869 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.287895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.287953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-logs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288150 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhq6\" (UniqueName: \"kubernetes.io/projected/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-kube-api-access-skhq6\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288191 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-config-data\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288238 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-scripts\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288259 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288377 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.288857 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-logs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.292452 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-scripts\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.292897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-config-data\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.292972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.293168 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.293880 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.300250 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.317922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhq6\" (UniqueName: \"kubernetes.io/projected/e6cd1c8b-47af-4035-9e6f-601dd5b94cd3-kube-api-access-skhq6\") pod \"cinder-api-0\" (UID: \"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3\") " pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.397966 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.735818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerStarted","Data":"42f3607f876c201de4b21124aaf7586a3007e8de12569cba242d0e9850d8636e"} Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.844127 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 18:07:41 crc kubenswrapper[4768]: W1124 18:07:41.848858 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6cd1c8b_47af_4035_9e6f_601dd5b94cd3.slice/crio-69d4cad897b6d24b5863c23ddbc94690c747a0df5736dd99f25cbba73a88b04a WatchSource:0}: Error finding container 69d4cad897b6d24b5863c23ddbc94690c747a0df5736dd99f25cbba73a88b04a: Status 404 returned error can't find the container with id 69d4cad897b6d24b5863c23ddbc94690c747a0df5736dd99f25cbba73a88b04a Nov 24 18:07:41 crc kubenswrapper[4768]: I1124 18:07:41.909729 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e955ebb-07b3-4997-b373-7e39827a2d90" path="/var/lib/kubelet/pods/2e955ebb-07b3-4997-b373-7e39827a2d90/volumes" Nov 24 18:07:42 crc kubenswrapper[4768]: I1124 18:07:42.755814 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3","Type":"ContainerStarted","Data":"c162ab577a672edccc4b065d326d64c2e8ff1b98a0d7ec28f7c3f0e25d0035ee"} Nov 24 18:07:42 crc kubenswrapper[4768]: I1124 18:07:42.756541 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3","Type":"ContainerStarted","Data":"69d4cad897b6d24b5863c23ddbc94690c747a0df5736dd99f25cbba73a88b04a"} Nov 24 18:07:43 crc kubenswrapper[4768]: I1124 18:07:43.656810 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:07:43 crc kubenswrapper[4768]: I1124 18:07:43.656897 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:07:43 crc kubenswrapper[4768]: I1124 18:07:43.766222 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6cd1c8b-47af-4035-9e6f-601dd5b94cd3","Type":"ContainerStarted","Data":"2b2fb4435e3b3f01bc5b64a276cbc7d84778ec85f5f3deed74f46b17b28ff504"} Nov 24 18:07:43 crc kubenswrapper[4768]: I1124 18:07:43.766378 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 18:07:43 crc kubenswrapper[4768]: I1124 18:07:43.785978 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.785952011 podStartE2EDuration="2.785952011s" podCreationTimestamp="2025-11-24 18:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:07:43.782640573 +0000 UTC 
m=+1102.643222350" watchObservedRunningTime="2025-11-24 18:07:43.785952011 +0000 UTC m=+1102.646533788" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.513132 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cl9zb"] Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.514259 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.516995 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.517029 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.517266 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qw9tm" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.542850 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cl9zb"] Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.655903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9qwm\" (UniqueName: \"kubernetes.io/projected/d87682ae-914f-4570-9faa-2031bdd70f29-kube-api-access-z9qwm\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.655994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-scripts\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.656307 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-config-data\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.656572 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.758736 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.758857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9qwm\" (UniqueName: \"kubernetes.io/projected/d87682ae-914f-4570-9faa-2031bdd70f29-kube-api-access-z9qwm\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: 
\"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.758916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-scripts\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.758985 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-config-data\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.764652 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-scripts\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.764932 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-config-data\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.767724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.781262 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9qwm\" (UniqueName: \"kubernetes.io/projected/d87682ae-914f-4570-9faa-2031bdd70f29-kube-api-access-z9qwm\") pod \"nova-cell0-conductor-db-sync-cl9zb\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:44 crc kubenswrapper[4768]: I1124 18:07:44.849046 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:07:45 crc kubenswrapper[4768]: I1124 18:07:45.296139 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cl9zb"] Nov 24 18:07:45 crc kubenswrapper[4768]: W1124 18:07:45.300680 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd87682ae_914f_4570_9faa_2031bdd70f29.slice/crio-8e306700bd08222644fa766500114d12045f6bd8477caedfa9f88c254f995c4b WatchSource:0}: Error finding container 8e306700bd08222644fa766500114d12045f6bd8477caedfa9f88c254f995c4b: Status 404 returned error can't find the container with id 8e306700bd08222644fa766500114d12045f6bd8477caedfa9f88c254f995c4b Nov 24 18:07:45 crc kubenswrapper[4768]: I1124 18:07:45.797993 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" event={"ID":"d87682ae-914f-4570-9faa-2031bdd70f29","Type":"ContainerStarted","Data":"8e306700bd08222644fa766500114d12045f6bd8477caedfa9f88c254f995c4b"} Nov 24 18:07:46 crc kubenswrapper[4768]: I1124 18:07:46.807514 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerStarted","Data":"04f76ad819d2ffd46544ac608e0fd6e4b573a0baf73b7f7b34c1e0a4780eb124"} Nov 24 18:07:47 crc kubenswrapper[4768]: I1124 18:07:47.820884 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerStarted","Data":"e6281df243864038e84526025e0266cdd9faaf4ffe94a03c602810e302ca6c6f"} Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.843728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerStarted","Data":"e2669de5e9e2f1e4b4308b718882bb3a4875bfc18480d16f5c89a34c21866d16"} Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.844390 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.843982 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="sg-core" containerID="cri-o://e6281df243864038e84526025e0266cdd9faaf4ffe94a03c602810e302ca6c6f" gracePeriod=30 Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.843852 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-central-agent" containerID="cri-o://42f3607f876c201de4b21124aaf7586a3007e8de12569cba242d0e9850d8636e" gracePeriod=30 Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.843976 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-notification-agent" containerID="cri-o://04f76ad819d2ffd46544ac608e0fd6e4b573a0baf73b7f7b34c1e0a4780eb124" gracePeriod=30 Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.844007 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="proxy-httpd" containerID="cri-o://e2669de5e9e2f1e4b4308b718882bb3a4875bfc18480d16f5c89a34c21866d16" 
gracePeriod=30 Nov 24 18:07:49 crc kubenswrapper[4768]: I1124 18:07:49.868531 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.786742487 podStartE2EDuration="11.868501005s" podCreationTimestamp="2025-11-24 18:07:38 +0000 UTC" firstStartedPulling="2025-11-24 18:07:39.564678081 +0000 UTC m=+1098.425259868" lastFinishedPulling="2025-11-24 18:07:48.646436619 +0000 UTC m=+1107.507018386" observedRunningTime="2025-11-24 18:07:49.86267887 +0000 UTC m=+1108.723260647" watchObservedRunningTime="2025-11-24 18:07:49.868501005 +0000 UTC m=+1108.729082782" Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854715 4768 generic.go:334] "Generic (PLEG): container finished" podID="a4aba723-2648-44d9-863d-56f7a0803996" containerID="e2669de5e9e2f1e4b4308b718882bb3a4875bfc18480d16f5c89a34c21866d16" exitCode=0 Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854788 4768 generic.go:334] "Generic (PLEG): container finished" podID="a4aba723-2648-44d9-863d-56f7a0803996" containerID="e6281df243864038e84526025e0266cdd9faaf4ffe94a03c602810e302ca6c6f" exitCode=2 Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854765 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerDied","Data":"e2669de5e9e2f1e4b4308b718882bb3a4875bfc18480d16f5c89a34c21866d16"} Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854857 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerDied","Data":"e6281df243864038e84526025e0266cdd9faaf4ffe94a03c602810e302ca6c6f"} Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854890 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerDied","Data":"04f76ad819d2ffd46544ac608e0fd6e4b573a0baf73b7f7b34c1e0a4780eb124"} Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854799 4768 generic.go:334] "Generic (PLEG): container finished" podID="a4aba723-2648-44d9-863d-56f7a0803996" containerID="04f76ad819d2ffd46544ac608e0fd6e4b573a0baf73b7f7b34c1e0a4780eb124" exitCode=0 Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.854970 4768 generic.go:334] "Generic (PLEG): container finished" podID="a4aba723-2648-44d9-863d-56f7a0803996" containerID="42f3607f876c201de4b21124aaf7586a3007e8de12569cba242d0e9850d8636e" exitCode=0 Nov 24 18:07:50 crc kubenswrapper[4768]: I1124 18:07:50.855016 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerDied","Data":"42f3607f876c201de4b21124aaf7586a3007e8de12569cba242d0e9850d8636e"} Nov 24 18:07:53 crc kubenswrapper[4768]: I1124 18:07:53.596108 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.425732 4768 util.go:48] "No ready sandbox for pod can be found. 
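The ceilometer-0 startup-latency entry above decomposes as follows: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A sketch reproducing the arithmetic with the log's own timestamps; the reported SLO value differs in the last digits because kubelet mixes wall-clock and monotonic readings:

package main

import (
	"fmt"
	"time"
)

// Reproduces the ceilometer-0 startup-latency arithmetic from
// pod_startup_latency_tracker.go:104 above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-24 18:07:38 +0000 UTC")
	firstPull := parse("2025-11-24 18:07:39.564678081 +0000 UTC")
	lastPull := parse("2025-11-24 18:07:48.646436619 +0000 UTC")
	running := parse("2025-11-24 18:07:49.868501005 +0000 UTC")

	e2e := running.Sub(created)     // ~11.868501005s, as logged
	pull := lastPull.Sub(firstPull) // ~9.08s spent pulling images
	slo := e2e - pull               // ~2.786s, matching podStartSLOduration
	fmt.Println(e2e, pull, slo)
}

So of the ~11.9s end-to-end start, roughly 9.1s was image pulling, which the SLO metric deliberately excludes.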
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.480557 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-log-httpd\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.480671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzh6x\" (UniqueName: \"kubernetes.io/projected/a4aba723-2648-44d9-863d-56f7a0803996-kube-api-access-bzh6x\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.480783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-config-data\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.480860 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-combined-ca-bundle\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.481052 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-run-httpd\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.481199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-sg-core-conf-yaml\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.481349 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.481542 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.482278 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-scripts\") pod \"a4aba723-2648-44d9-863d-56f7a0803996\" (UID: \"a4aba723-2648-44d9-863d-56f7a0803996\") " Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.483046 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.483154 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4aba723-2648-44d9-863d-56f7a0803996-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.486414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aba723-2648-44d9-863d-56f7a0803996-kube-api-access-bzh6x" (OuterVolumeSpecName: "kube-api-access-bzh6x") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "kube-api-access-bzh6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.498684 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-scripts" (OuterVolumeSpecName: "scripts") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.517901 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.562334 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.584830 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzh6x\" (UniqueName: \"kubernetes.io/projected/a4aba723-2648-44d9-863d-56f7a0803996-kube-api-access-bzh6x\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.584870 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.584899 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.584910 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.609772 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-config-data" (OuterVolumeSpecName: "config-data") pod "a4aba723-2648-44d9-863d-56f7a0803996" (UID: "a4aba723-2648-44d9-863d-56f7a0803996"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.687308 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4aba723-2648-44d9-863d-56f7a0803996-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.932352 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4aba723-2648-44d9-863d-56f7a0803996","Type":"ContainerDied","Data":"e6625e7bfc19b967949da7bf632c0dd58c004f241d419fa1ef471870f580e2cd"} Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.932866 4768 scope.go:117] "RemoveContainer" containerID="e2669de5e9e2f1e4b4308b718882bb3a4875bfc18480d16f5c89a34c21866d16" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.932453 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.969030 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.980325 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:55 crc kubenswrapper[4768]: I1124 18:07:55.999568 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:56 crc kubenswrapper[4768]: E1124 18:07:56.000221 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-central-agent" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.000324 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-central-agent" Nov 24 18:07:56 crc kubenswrapper[4768]: E1124 18:07:56.000408 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-notification-agent" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.000513 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-notification-agent" Nov 24 18:07:56 crc kubenswrapper[4768]: E1124 18:07:56.000651 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="sg-core" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.000762 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="sg-core" Nov 24 18:07:56 crc kubenswrapper[4768]: E1124 18:07:56.000845 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="proxy-httpd" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.000911 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="proxy-httpd" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.001206 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="proxy-httpd" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.001340 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="sg-core" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.001428 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-notification-agent" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.001559 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aba723-2648-44d9-863d-56f7a0803996" containerName="ceilometer-central-agent" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.004007 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.006917 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.008012 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.022056 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.095797 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.095903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrx6\" (UniqueName: \"kubernetes.io/projected/bb0571fc-4ac1-413b-9253-c3555bdde7b4-kube-api-access-6nrx6\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.095942 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.095981 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-log-httpd\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.096007 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-config-data\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.096028 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-run-httpd\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.096064 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-scripts\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.197869 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 
18:07:56.197954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrx6\" (UniqueName: \"kubernetes.io/projected/bb0571fc-4ac1-413b-9253-c3555bdde7b4-kube-api-access-6nrx6\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.197980 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.198015 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-log-httpd\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.198042 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-config-data\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.198064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-run-httpd\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.198100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-scripts\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.422006 4768 scope.go:117] "RemoveContainer" containerID="e6281df243864038e84526025e0266cdd9faaf4ffe94a03c602810e302ca6c6f" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.424477 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.486561 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.495863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-log-httpd\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.496024 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-run-httpd\") pod \"ceilometer-0\" 
(UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.496764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-config-data\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.497179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-scripts\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.536755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrx6\" (UniqueName: \"kubernetes.io/projected/bb0571fc-4ac1-413b-9253-c3555bdde7b4-kube-api-access-6nrx6\") pod \"ceilometer-0\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " pod="openstack/ceilometer-0" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.561346 4768 scope.go:117] "RemoveContainer" containerID="04f76ad819d2ffd46544ac608e0fd6e4b573a0baf73b7f7b34c1e0a4780eb124" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.584602 4768 scope.go:117] "RemoveContainer" containerID="42f3607f876c201de4b21124aaf7586a3007e8de12569cba242d0e9850d8636e" Nov 24 18:07:56 crc kubenswrapper[4768]: I1124 18:07:56.703830 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:07:57 crc kubenswrapper[4768]: I1124 18:07:57.183026 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:07:57 crc kubenswrapper[4768]: W1124 18:07:57.225023 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0571fc_4ac1_413b_9253_c3555bdde7b4.slice/crio-1f5284a14c05abfa8f87dc1704f534d1c90c9831fecbcd048cce6aa9c41405a9 WatchSource:0}: Error finding container 1f5284a14c05abfa8f87dc1704f534d1c90c9831fecbcd048cce6aa9c41405a9: Status 404 returned error can't find the container with id 1f5284a14c05abfa8f87dc1704f534d1c90c9831fecbcd048cce6aa9c41405a9 Nov 24 18:07:57 crc kubenswrapper[4768]: I1124 18:07:57.908842 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4aba723-2648-44d9-863d-56f7a0803996" path="/var/lib/kubelet/pods/a4aba723-2648-44d9-863d-56f7a0803996/volumes" Nov 24 18:07:57 crc kubenswrapper[4768]: I1124 18:07:57.981213 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" event={"ID":"d87682ae-914f-4570-9faa-2031bdd70f29","Type":"ContainerStarted","Data":"063d5fbc59089ff4c162d5f41af969da5242ecc18d9be1f3016a39eca1a84236"} Nov 24 18:07:57 crc kubenswrapper[4768]: I1124 18:07:57.982540 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerStarted","Data":"1f5284a14c05abfa8f87dc1704f534d1c90c9831fecbcd048cce6aa9c41405a9"} Nov 24 18:08:01 crc kubenswrapper[4768]: I1124 18:08:01.031054 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" podStartSLOduration=6.5219820120000005 podStartE2EDuration="17.031025699s" podCreationTimestamp="2025-11-24 18:07:44 +0000 UTC" firstStartedPulling="2025-11-24 
18:07:45.304581119 +0000 UTC m=+1104.165162896" lastFinishedPulling="2025-11-24 18:07:55.813624806 +0000 UTC m=+1114.674206583" observedRunningTime="2025-11-24 18:08:01.02767985 +0000 UTC m=+1119.888261647" watchObservedRunningTime="2025-11-24 18:08:01.031025699 +0000 UTC m=+1119.891607476" Nov 24 18:08:02 crc kubenswrapper[4768]: I1124 18:08:02.022274 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerStarted","Data":"5e1a3882422a499a91508cb1c317f32cd3bfb30977f5c09cf2c6aebd3595efb5"} Nov 24 18:08:03 crc kubenswrapper[4768]: I1124 18:08:03.031507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerStarted","Data":"9726bf9e1c01ef800e9f14da964e0bdd67ab8f8c65ecdfdc4c4c98ea0c068635"} Nov 24 18:08:03 crc kubenswrapper[4768]: I1124 18:08:03.031548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerStarted","Data":"3bd25861c7e572873c4e4e5eeac3d655d08078657de6165f869cab05b383dd6c"} Nov 24 18:08:05 crc kubenswrapper[4768]: I1124 18:08:05.051855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerStarted","Data":"92e09ff839a9ebf93d6a1a1bd626ab8da9d1cf362542ff2a5105bc486e6ebd3e"} Nov 24 18:08:05 crc kubenswrapper[4768]: I1124 18:08:05.054592 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 18:08:05 crc kubenswrapper[4768]: I1124 18:08:05.077807 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.22659517 podStartE2EDuration="10.077779691s" podCreationTimestamp="2025-11-24 18:07:55 +0000 UTC" firstStartedPulling="2025-11-24 18:07:57.227283354 +0000 UTC m=+1116.087865121" lastFinishedPulling="2025-11-24 18:08:04.078467865 +0000 UTC m=+1122.939049642" observedRunningTime="2025-11-24 18:08:05.076294802 +0000 UTC m=+1123.936876599" watchObservedRunningTime="2025-11-24 18:08:05.077779691 +0000 UTC m=+1123.938361468" Nov 24 18:08:13 crc kubenswrapper[4768]: I1124 18:08:13.656283 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:08:13 crc kubenswrapper[4768]: I1124 18:08:13.656887 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:08:14 crc kubenswrapper[4768]: I1124 18:08:14.138608 4768 generic.go:334] "Generic (PLEG): container finished" podID="d87682ae-914f-4570-9faa-2031bdd70f29" containerID="063d5fbc59089ff4c162d5f41af969da5242ecc18d9be1f3016a39eca1a84236" exitCode=0 Nov 24 18:08:14 crc kubenswrapper[4768]: I1124 18:08:14.138706 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" 
event={"ID":"d87682ae-914f-4570-9faa-2031bdd70f29","Type":"ContainerDied","Data":"063d5fbc59089ff4c162d5f41af969da5242ecc18d9be1f3016a39eca1a84236"} Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.477993 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.576158 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-scripts\") pod \"d87682ae-914f-4570-9faa-2031bdd70f29\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.576235 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9qwm\" (UniqueName: \"kubernetes.io/projected/d87682ae-914f-4570-9faa-2031bdd70f29-kube-api-access-z9qwm\") pod \"d87682ae-914f-4570-9faa-2031bdd70f29\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.576297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-combined-ca-bundle\") pod \"d87682ae-914f-4570-9faa-2031bdd70f29\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.576474 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-config-data\") pod \"d87682ae-914f-4570-9faa-2031bdd70f29\" (UID: \"d87682ae-914f-4570-9faa-2031bdd70f29\") " Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.581972 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87682ae-914f-4570-9faa-2031bdd70f29-kube-api-access-z9qwm" (OuterVolumeSpecName: "kube-api-access-z9qwm") pod "d87682ae-914f-4570-9faa-2031bdd70f29" (UID: "d87682ae-914f-4570-9faa-2031bdd70f29"). InnerVolumeSpecName "kube-api-access-z9qwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.582085 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-scripts" (OuterVolumeSpecName: "scripts") pod "d87682ae-914f-4570-9faa-2031bdd70f29" (UID: "d87682ae-914f-4570-9faa-2031bdd70f29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.604950 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-config-data" (OuterVolumeSpecName: "config-data") pod "d87682ae-914f-4570-9faa-2031bdd70f29" (UID: "d87682ae-914f-4570-9faa-2031bdd70f29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.606186 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d87682ae-914f-4570-9faa-2031bdd70f29" (UID: "d87682ae-914f-4570-9faa-2031bdd70f29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.679048 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.679084 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9qwm\" (UniqueName: \"kubernetes.io/projected/d87682ae-914f-4570-9faa-2031bdd70f29-kube-api-access-z9qwm\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.679097 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:15 crc kubenswrapper[4768]: I1124 18:08:15.679106 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87682ae-914f-4570-9faa-2031bdd70f29-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.162784 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" event={"ID":"d87682ae-914f-4570-9faa-2031bdd70f29","Type":"ContainerDied","Data":"8e306700bd08222644fa766500114d12045f6bd8477caedfa9f88c254f995c4b"} Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.162823 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e306700bd08222644fa766500114d12045f6bd8477caedfa9f88c254f995c4b" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.162890 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cl9zb" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.257431 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 18:08:16 crc kubenswrapper[4768]: E1124 18:08:16.257846 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87682ae-914f-4570-9faa-2031bdd70f29" containerName="nova-cell0-conductor-db-sync" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.257863 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87682ae-914f-4570-9faa-2031bdd70f29" containerName="nova-cell0-conductor-db-sync" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.258027 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87682ae-914f-4570-9faa-2031bdd70f29" containerName="nova-cell0-conductor-db-sync" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.258630 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.260984 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.261275 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qw9tm" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.273739 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.287587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1cfe70-c0e5-4191-8605-c57257bfef1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.287955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1cfe70-c0e5-4191-8605-c57257bfef1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.287994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldqh\" (UniqueName: \"kubernetes.io/projected/ae1cfe70-c0e5-4191-8605-c57257bfef1f-kube-api-access-vldqh\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.389806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1cfe70-c0e5-4191-8605-c57257bfef1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.390123 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1cfe70-c0e5-4191-8605-c57257bfef1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.390253 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vldqh\" (UniqueName: \"kubernetes.io/projected/ae1cfe70-c0e5-4191-8605-c57257bfef1f-kube-api-access-vldqh\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.393389 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1cfe70-c0e5-4191-8605-c57257bfef1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.394110 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1cfe70-c0e5-4191-8605-c57257bfef1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.407675 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vldqh\" (UniqueName: \"kubernetes.io/projected/ae1cfe70-c0e5-4191-8605-c57257bfef1f-kube-api-access-vldqh\") pod \"nova-cell0-conductor-0\" (UID: \"ae1cfe70-c0e5-4191-8605-c57257bfef1f\") " pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:16 crc kubenswrapper[4768]: I1124 18:08:16.575121 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:17 crc kubenswrapper[4768]: I1124 18:08:17.046268 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 18:08:17 crc kubenswrapper[4768]: I1124 18:08:17.173092 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ae1cfe70-c0e5-4191-8605-c57257bfef1f","Type":"ContainerStarted","Data":"4885d4afc2ed83a3538022b3176c3e1dbd6d5af590ed661d51a1f312b6cbf3f0"} Nov 24 18:08:18 crc kubenswrapper[4768]: I1124 18:08:18.183659 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ae1cfe70-c0e5-4191-8605-c57257bfef1f","Type":"ContainerStarted","Data":"38eb93376d7a54956688ec00f21636d7e1af62ad9cb97b5c6a61d8fe30b97789"} Nov 24 18:08:18 crc kubenswrapper[4768]: I1124 18:08:18.184804 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:18 crc kubenswrapper[4768]: I1124 18:08:18.208331 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.208314345 podStartE2EDuration="2.208314345s" podCreationTimestamp="2025-11-24 18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:18.205274464 +0000 UTC m=+1137.065856291" watchObservedRunningTime="2025-11-24 18:08:18.208314345 +0000 UTC m=+1137.068896112" Nov 24 18:08:26 crc kubenswrapper[4768]: I1124 18:08:26.610717 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 18:08:26 crc kubenswrapper[4768]: I1124 18:08:26.709864 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.139925 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qgtgz"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.141222 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.145166 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.145314 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.155588 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgtgz"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.297551 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.299277 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.300439 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw699\" (UniqueName: \"kubernetes.io/projected/8def680f-a48e-4b0f-9941-0cbb8a626206-kube-api-access-mw699\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.300515 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-config-data\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.300534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-scripts\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.300548 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.303241 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.312313 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.401919 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw699\" (UniqueName: \"kubernetes.io/projected/8def680f-a48e-4b0f-9941-0cbb8a626206-kube-api-access-mw699\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.402010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-config-data\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: 
\"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.402035 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-scripts\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.402055 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.402099 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-config-data\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.403320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.403364 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-logs\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.403394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ls5\" (UniqueName: \"kubernetes.io/projected/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-kube-api-access-n9ls5\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.410902 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.412778 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-scripts\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.413704 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.426324 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.428506 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-config-data\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.428639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.435208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw699\" (UniqueName: \"kubernetes.io/projected/8def680f-a48e-4b0f-9941-0cbb8a626206-kube-api-access-mw699\") pod \"nova-cell0-cell-mapping-qgtgz\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.459925 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.460551 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.504441 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.504516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-logs\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.504542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9ls5\" (UniqueName: \"kubernetes.io/projected/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-kube-api-access-n9ls5\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.504643 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-config-data\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.506004 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-logs\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.522125 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.528921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-config-data\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.549632 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-fltgw"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.551068 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.561587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9ls5\" (UniqueName: \"kubernetes.io/projected/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-kube-api-access-n9ls5\") pod \"nova-api-0\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.562012 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.563355 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.572632 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.587708 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-fltgw"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.606652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.606711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-config-data\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.606800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-logs\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.606826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6g8d\" (UniqueName: \"kubernetes.io/projected/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-kube-api-access-w6g8d\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.618075 4768 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.630538 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.693078 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.699800 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.704409 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710393 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-config\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-logs\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710470 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6g8d\" (UniqueName: \"kubernetes.io/projected/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-kube-api-access-w6g8d\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710504 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710551 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-dns-svc\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710646 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-config-data\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710688 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcnr\" (UniqueName: \"kubernetes.io/projected/435fe835-6f0c-426f-bc16-ec940fba83b0-kube-api-access-8pcnr\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.710707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.714777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-logs\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.714934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-config-data\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.715864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6r7l\" (UniqueName: \"kubernetes.io/projected/46501d01-d421-402b-889b-6135c2c8ef8a-kube-api-access-r6r7l\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.715909 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.730706 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.736327 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.738598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6g8d\" (UniqueName: \"kubernetes.io/projected/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-kube-api-access-w6g8d\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.739459 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-config-data\") pod \"nova-metadata-0\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823643 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823696 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-config\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823781 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823837 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-dns-svc\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823852 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-config-data\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pcnr\" (UniqueName: \"kubernetes.io/projected/435fe835-6f0c-426f-bc16-ec940fba83b0-kube-api-access-8pcnr\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823900 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvs5\" (UniqueName: \"kubernetes.io/projected/33b18e38-4235-4db0-a265-a985463b5d5e-kube-api-access-rdvs5\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6r7l\" (UniqueName: 
\"kubernetes.io/projected/46501d01-d421-402b-889b-6135c2c8ef8a-kube-api-access-r6r7l\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.823949 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.825237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-config\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.825237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.825458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-dns-svc\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.825679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.827768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.832466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-config-data\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.843939 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6r7l\" (UniqueName: \"kubernetes.io/projected/46501d01-d421-402b-889b-6135c2c8ef8a-kube-api-access-r6r7l\") pod \"dnsmasq-dns-566b5b7845-fltgw\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") " pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.846211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pcnr\" (UniqueName: \"kubernetes.io/projected/435fe835-6f0c-426f-bc16-ec940fba83b0-kube-api-access-8pcnr\") pod \"nova-scheduler-0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") " 
pod="openstack/nova-scheduler-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.926184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.926263 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.926411 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdvs5\" (UniqueName: \"kubernetes.io/projected/33b18e38-4235-4db0-a265-a985463b5d5e-kube-api-access-rdvs5\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.929840 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.933253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.942934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.956003 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdvs5\" (UniqueName: \"kubernetes.io/projected/33b18e38-4235-4db0-a265-a985463b5d5e-kube-api-access-rdvs5\") pod \"nova-cell1-novncproxy-0\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:27 crc kubenswrapper[4768]: I1124 18:08:27.987197 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.011655 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.037443 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.061922 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.127813 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgtgz"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.307186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgtgz" event={"ID":"8def680f-a48e-4b0f-9941-0cbb8a626206","Type":"ContainerStarted","Data":"b84745f8f9dd9dc0b17ebeba424e6b20ad02da3adb1c88d22fadc1aa986e07c1"} Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.309570 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8","Type":"ContainerStarted","Data":"1f76e12b847bb9af6cf41c00eae9c5275e33555c64e36fc3b5f2ae3798859981"} Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.357428 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7rkgz"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.358638 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.371315 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.374077 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.408726 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7rkgz"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.444681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-config-data\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.445457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.445589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v47lg\" (UniqueName: \"kubernetes.io/projected/304a3869-b79e-47bc-ad78-0a4a41868b4f-kube-api-access-v47lg\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.445615 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-scripts\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 
crc kubenswrapper[4768]: I1124 18:08:28.489075 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.549984 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-config-data\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.550186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.550300 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v47lg\" (UniqueName: \"kubernetes.io/projected/304a3869-b79e-47bc-ad78-0a4a41868b4f-kube-api-access-v47lg\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.550331 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-scripts\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.557974 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-scripts\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.565165 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-config-data\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.565231 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.568132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v47lg\" (UniqueName: \"kubernetes.io/projected/304a3869-b79e-47bc-ad78-0a4a41868b4f-kube-api-access-v47lg\") pod \"nova-cell1-conductor-db-sync-7rkgz\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.611518 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-fltgw"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.699464 4768 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.764616 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:08:28 crc kubenswrapper[4768]: I1124 18:08:28.789425 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.330454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e19be9c1-5ec1-49b0-a86e-92833d2dbf93","Type":"ContainerStarted","Data":"22e1eb295407472cd4bbd150ed0ba8cde9ee1040dabf3329744ba23685627a11"} Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.337906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgtgz" event={"ID":"8def680f-a48e-4b0f-9941-0cbb8a626206","Type":"ContainerStarted","Data":"61ad80c7b16201ebc7978162933c85bad525704b18e4eceba5dfcbba71d8d6d3"} Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.340858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33b18e38-4235-4db0-a265-a985463b5d5e","Type":"ContainerStarted","Data":"31f37f703182c3583fc19f79d362124eecc4f53f6fc2e9ab5fed03fd097e1034"} Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.342437 4768 generic.go:334] "Generic (PLEG): container finished" podID="46501d01-d421-402b-889b-6135c2c8ef8a" containerID="aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987" exitCode=0 Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.342514 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" event={"ID":"46501d01-d421-402b-889b-6135c2c8ef8a","Type":"ContainerDied","Data":"aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987"} Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.342538 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" event={"ID":"46501d01-d421-402b-889b-6135c2c8ef8a","Type":"ContainerStarted","Data":"78938fed3a0255362fa28a7e8a517374d96eb06ba4704dfeb016c99bf14fa9d8"} Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.346194 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"435fe835-6f0c-426f-bc16-ec940fba83b0","Type":"ContainerStarted","Data":"b1170e156f908450e96d3fda73fca86d5749c8a3679a49dbfb0ec06e1b11dba2"} Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.356637 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7rkgz"] Nov 24 18:08:29 crc kubenswrapper[4768]: I1124 18:08:29.360656 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qgtgz" podStartSLOduration=2.360642429 podStartE2EDuration="2.360642429s" podCreationTimestamp="2025-11-24 18:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:29.35429752 +0000 UTC m=+1148.214879297" watchObservedRunningTime="2025-11-24 18:08:29.360642429 +0000 UTC m=+1148.221224206" Nov 24 18:08:30 crc kubenswrapper[4768]: I1124 18:08:30.360157 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" 
event={"ID":"46501d01-d421-402b-889b-6135c2c8ef8a","Type":"ContainerStarted","Data":"dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13"} Nov 24 18:08:30 crc kubenswrapper[4768]: I1124 18:08:30.362700 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:30 crc kubenswrapper[4768]: I1124 18:08:30.367583 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" event={"ID":"304a3869-b79e-47bc-ad78-0a4a41868b4f","Type":"ContainerStarted","Data":"036a387e45fa1462799505d6a5283c044788f73c5440c68bce8b7cd57ff2299b"} Nov 24 18:08:30 crc kubenswrapper[4768]: I1124 18:08:30.367632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" event={"ID":"304a3869-b79e-47bc-ad78-0a4a41868b4f","Type":"ContainerStarted","Data":"293c5c7bd8eb463faece385fe1fe7dac41eff79f590459c455e7252da4bbf714"} Nov 24 18:08:30 crc kubenswrapper[4768]: I1124 18:08:30.391203 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" podStartSLOduration=3.391176965 podStartE2EDuration="3.391176965s" podCreationTimestamp="2025-11-24 18:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:30.386068891 +0000 UTC m=+1149.246650668" watchObservedRunningTime="2025-11-24 18:08:30.391176965 +0000 UTC m=+1149.251758742" Nov 24 18:08:30 crc kubenswrapper[4768]: I1124 18:08:30.411079 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" podStartSLOduration=2.411031303 podStartE2EDuration="2.411031303s" podCreationTimestamp="2025-11-24 18:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:30.404649044 +0000 UTC m=+1149.265230821" watchObservedRunningTime="2025-11-24 18:08:30.411031303 +0000 UTC m=+1149.271613080" Nov 24 18:08:31 crc kubenswrapper[4768]: I1124 18:08:31.344531 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:31 crc kubenswrapper[4768]: I1124 18:08:31.356665 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:08:32 crc kubenswrapper[4768]: I1124 18:08:32.385691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"435fe835-6f0c-426f-bc16-ec940fba83b0","Type":"ContainerStarted","Data":"37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4"} Nov 24 18:08:32 crc kubenswrapper[4768]: I1124 18:08:32.391033 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33b18e38-4235-4db0-a265-a985463b5d5e","Type":"ContainerStarted","Data":"a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42"} Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.432430 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e19be9c1-5ec1-49b0-a86e-92833d2dbf93","Type":"ContainerStarted","Data":"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58"} Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.432798 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e19be9c1-5ec1-49b0-a86e-92833d2dbf93","Type":"ContainerStarted","Data":"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13"} Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.432563 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-metadata" containerID="cri-o://b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58" gracePeriod=30 Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.432476 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-log" containerID="cri-o://72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13" gracePeriod=30 Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.447462 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="33b18e38-4235-4db0-a265-a985463b5d5e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42" gracePeriod=30 Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.447526 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8","Type":"ContainerStarted","Data":"a8cfd115dda30d626f1b52dff5ce5d6b5a22283b6f5ce046ceb5ad6442e50aef"} Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.448303 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8","Type":"ContainerStarted","Data":"ac4c8066a159441730610f0a5d7d0a0360ebde00a621351fba2cc6908817098e"} Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.458706 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9408766269999997 podStartE2EDuration="6.458687714s" podCreationTimestamp="2025-11-24 18:08:27 +0000 UTC" firstStartedPulling="2025-11-24 18:08:28.484739772 +0000 UTC m=+1147.345321549" lastFinishedPulling="2025-11-24 18:08:32.002550859 +0000 UTC m=+1150.863132636" observedRunningTime="2025-11-24 18:08:33.455592772 +0000 UTC m=+1152.316174549" watchObservedRunningTime="2025-11-24 18:08:33.458687714 +0000 UTC m=+1152.319269491" Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.509874 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.367731042 podStartE2EDuration="6.509846864s" podCreationTimestamp="2025-11-24 18:08:27 +0000 UTC" firstStartedPulling="2025-11-24 18:08:28.861854924 +0000 UTC m=+1147.722436691" lastFinishedPulling="2025-11-24 18:08:32.003970736 +0000 UTC m=+1150.864552513" observedRunningTime="2025-11-24 18:08:33.507230305 +0000 UTC m=+1152.367812082" watchObservedRunningTime="2025-11-24 18:08:33.509846864 +0000 UTC m=+1152.370428641" Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.512560 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.301196935 podStartE2EDuration="6.512554726s" podCreationTimestamp="2025-11-24 18:08:27 +0000 UTC" firstStartedPulling="2025-11-24 18:08:28.792288746 +0000 UTC m=+1147.652870523" lastFinishedPulling="2025-11-24 18:08:32.003646537 +0000 UTC m=+1150.864228314" observedRunningTime="2025-11-24 
18:08:33.492132083 +0000 UTC m=+1152.352713860" watchObservedRunningTime="2025-11-24 18:08:33.512554726 +0000 UTC m=+1152.373136513" Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.527931 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.676559755 podStartE2EDuration="6.527910894s" podCreationTimestamp="2025-11-24 18:08:27 +0000 UTC" firstStartedPulling="2025-11-24 18:08:28.153703485 +0000 UTC m=+1147.014285262" lastFinishedPulling="2025-11-24 18:08:32.005054624 +0000 UTC m=+1150.865636401" observedRunningTime="2025-11-24 18:08:33.523829746 +0000 UTC m=+1152.384411533" watchObservedRunningTime="2025-11-24 18:08:33.527910894 +0000 UTC m=+1152.388492671" Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.632815 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:08:33 crc kubenswrapper[4768]: I1124 18:08:33.633015 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="296e4b18-c3e3-481d-bad3-0c2427ca013b" containerName="kube-state-metrics" containerID="cri-o://52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0" gracePeriod=30 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.069813 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.219702 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-logs\") pod \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.219771 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6g8d\" (UniqueName: \"kubernetes.io/projected/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-kube-api-access-w6g8d\") pod \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.219819 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-config-data\") pod \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.220058 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-combined-ca-bundle\") pod \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\" (UID: \"e19be9c1-5ec1-49b0-a86e-92833d2dbf93\") " Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.220154 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-logs" (OuterVolumeSpecName: "logs") pod "e19be9c1-5ec1-49b0-a86e-92833d2dbf93" (UID: "e19be9c1-5ec1-49b0-a86e-92833d2dbf93"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.222384 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.231588 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.232146 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-kube-api-access-w6g8d" (OuterVolumeSpecName: "kube-api-access-w6g8d") pod "e19be9c1-5ec1-49b0-a86e-92833d2dbf93" (UID: "e19be9c1-5ec1-49b0-a86e-92833d2dbf93"). InnerVolumeSpecName "kube-api-access-w6g8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.280718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e19be9c1-5ec1-49b0-a86e-92833d2dbf93" (UID: "e19be9c1-5ec1-49b0-a86e-92833d2dbf93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.281018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-config-data" (OuterVolumeSpecName: "config-data") pod "e19be9c1-5ec1-49b0-a86e-92833d2dbf93" (UID: "e19be9c1-5ec1-49b0-a86e-92833d2dbf93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.324050 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svc9z\" (UniqueName: \"kubernetes.io/projected/296e4b18-c3e3-481d-bad3-0c2427ca013b-kube-api-access-svc9z\") pod \"296e4b18-c3e3-481d-bad3-0c2427ca013b\" (UID: \"296e4b18-c3e3-481d-bad3-0c2427ca013b\") " Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.324667 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.324685 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6g8d\" (UniqueName: \"kubernetes.io/projected/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-kube-api-access-w6g8d\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.324696 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19be9c1-5ec1-49b0-a86e-92833d2dbf93-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.330780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296e4b18-c3e3-481d-bad3-0c2427ca013b-kube-api-access-svc9z" (OuterVolumeSpecName: "kube-api-access-svc9z") pod "296e4b18-c3e3-481d-bad3-0c2427ca013b" (UID: "296e4b18-c3e3-481d-bad3-0c2427ca013b"). InnerVolumeSpecName "kube-api-access-svc9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.426401 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svc9z\" (UniqueName: \"kubernetes.io/projected/296e4b18-c3e3-481d-bad3-0c2427ca013b-kube-api-access-svc9z\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457356 4768 generic.go:334] "Generic (PLEG): container finished" podID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerID="b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58" exitCode=0 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457723 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457739 4768 generic.go:334] "Generic (PLEG): container finished" podID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerID="72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13" exitCode=143 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e19be9c1-5ec1-49b0-a86e-92833d2dbf93","Type":"ContainerDied","Data":"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58"} Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e19be9c1-5ec1-49b0-a86e-92833d2dbf93","Type":"ContainerDied","Data":"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13"} Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e19be9c1-5ec1-49b0-a86e-92833d2dbf93","Type":"ContainerDied","Data":"22e1eb295407472cd4bbd150ed0ba8cde9ee1040dabf3329744ba23685627a11"} Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.457848 4768 scope.go:117] "RemoveContainer" containerID="b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.460633 4768 generic.go:334] "Generic (PLEG): container finished" podID="296e4b18-c3e3-481d-bad3-0c2427ca013b" containerID="52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0" exitCode=2 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.461551 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.461666 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"296e4b18-c3e3-481d-bad3-0c2427ca013b","Type":"ContainerDied","Data":"52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0"} Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.461727 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"296e4b18-c3e3-481d-bad3-0c2427ca013b","Type":"ContainerDied","Data":"57f3c6bdcfb5e435d3ebd065dfa54dd13798e30cac1734f79bdfccbcd2c96e5a"} Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.479429 4768 scope.go:117] "RemoveContainer" containerID="72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.508734 4768 scope.go:117] "RemoveContainer" containerID="b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58" Nov 24 18:08:34 crc kubenswrapper[4768]: E1124 18:08:34.511087 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58\": container with ID starting with b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58 not found: ID does not exist" containerID="b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.511134 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58"} err="failed to get container status \"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58\": rpc error: code = NotFound desc = could not find container \"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58\": container with ID starting with b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58 not found: ID does not exist" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.511164 4768 scope.go:117] "RemoveContainer" containerID="72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13" Nov 24 18:08:34 crc kubenswrapper[4768]: E1124 18:08:34.511645 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13\": container with ID starting with 72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13 not found: ID does not exist" containerID="72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.511689 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13"} err="failed to get container status \"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13\": rpc error: code = NotFound desc = could not find container \"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13\": container with ID starting with 72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13 not found: ID does not exist" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.511721 4768 scope.go:117] "RemoveContainer" containerID="b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 
18:08:34.512021 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58"} err="failed to get container status \"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58\": rpc error: code = NotFound desc = could not find container \"b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58\": container with ID starting with b58aae1d7fa49c6d4f2331353f2ba37c79c036f0c81fd579f09532b261baff58 not found: ID does not exist" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.512046 4768 scope.go:117] "RemoveContainer" containerID="72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.512306 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13"} err="failed to get container status \"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13\": rpc error: code = NotFound desc = could not find container \"72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13\": container with ID starting with 72f6e31764dd79f4f11f6bb26ca64558055117538e00ece97b7057e465232a13 not found: ID does not exist" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.512333 4768 scope.go:117] "RemoveContainer" containerID="52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.521494 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.529516 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.535678 4768 scope.go:117] "RemoveContainer" containerID="52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0" Nov 24 18:08:34 crc kubenswrapper[4768]: E1124 18:08:34.536003 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0\": container with ID starting with 52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0 not found: ID does not exist" containerID="52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.536029 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0"} err="failed to get container status \"52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0\": rpc error: code = NotFound desc = could not find container \"52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0\": container with ID starting with 52a94d4caab75ee028e34591542b97b74ee3d096bbc6ecdf8807d5ac97cb1bb0 not found: ID does not exist" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.544475 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.555163 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565122 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: E1124 
18:08:34.565544 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296e4b18-c3e3-481d-bad3-0c2427ca013b" containerName="kube-state-metrics" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565564 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="296e4b18-c3e3-481d-bad3-0c2427ca013b" containerName="kube-state-metrics" Nov 24 18:08:34 crc kubenswrapper[4768]: E1124 18:08:34.565593 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-log" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565601 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-log" Nov 24 18:08:34 crc kubenswrapper[4768]: E1124 18:08:34.565621 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-metadata" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565628 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-metadata" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565812 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-metadata" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565832 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="296e4b18-c3e3-481d-bad3-0c2427ca013b" containerName="kube-state-metrics" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.565850 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" containerName="nova-metadata-log" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.566964 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.571742 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.572856 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.573153 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.573180 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.576399 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.576684 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.578969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.585591 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-config-data\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630720 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630763 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkjz\" (UniqueName: \"kubernetes.io/projected/84b9666b-4cab-4517-ac64-33e41869c70e-kube-api-access-rdkjz\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630780 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhm9\" (UniqueName: \"kubernetes.io/projected/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-api-access-5nhm9\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630798 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9666b-4cab-4517-ac64-33e41869c70e-logs\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630816 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630871 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.630936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733078 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkjz\" (UniqueName: \"kubernetes.io/projected/84b9666b-4cab-4517-ac64-33e41869c70e-kube-api-access-rdkjz\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhm9\" (UniqueName: \"kubernetes.io/projected/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-api-access-5nhm9\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9666b-4cab-4517-ac64-33e41869c70e-logs\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733257 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733303 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.733404 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-config-data\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.734070 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9666b-4cab-4517-ac64-33e41869c70e-logs\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.738387 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.739917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.740237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.740297 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-config-data\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.742930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.743251 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.750325 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkjz\" (UniqueName: \"kubernetes.io/projected/84b9666b-4cab-4517-ac64-33e41869c70e-kube-api-access-rdkjz\") pod \"nova-metadata-0\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") " pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.761067 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhm9\" (UniqueName: \"kubernetes.io/projected/20d3ec89-0004-4ed5-ae4b-c9dcf85a3151-kube-api-access-5nhm9\") pod \"kube-state-metrics-0\" (UID: \"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151\") " pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.893217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.915041 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.926758 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.927198 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-central-agent" containerID="cri-o://5e1a3882422a499a91508cb1c317f32cd3bfb30977f5c09cf2c6aebd3595efb5" gracePeriod=30 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.927257 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="sg-core" containerID="cri-o://9726bf9e1c01ef800e9f14da964e0bdd67ab8f8c65ecdfdc4c4c98ea0c068635" gracePeriod=30 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.927315 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-notification-agent" containerID="cri-o://3bd25861c7e572873c4e4e5eeac3d655d08078657de6165f869cab05b383dd6c" gracePeriod=30 Nov 24 18:08:34 crc kubenswrapper[4768]: I1124 18:08:34.927392 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="proxy-httpd" containerID="cri-o://92e09ff839a9ebf93d6a1a1bd626ab8da9d1cf362542ff2a5105bc486e6ebd3e" gracePeriod=30 Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.482031 4768 generic.go:334] "Generic (PLEG): container finished" podID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerID="92e09ff839a9ebf93d6a1a1bd626ab8da9d1cf362542ff2a5105bc486e6ebd3e" exitCode=0 Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.482360 4768 generic.go:334] "Generic (PLEG): container finished" podID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerID="9726bf9e1c01ef800e9f14da964e0bdd67ab8f8c65ecdfdc4c4c98ea0c068635" exitCode=2 Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.482114 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerDied","Data":"92e09ff839a9ebf93d6a1a1bd626ab8da9d1cf362542ff2a5105bc486e6ebd3e"} Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.482434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerDied","Data":"9726bf9e1c01ef800e9f14da964e0bdd67ab8f8c65ecdfdc4c4c98ea0c068635"} Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.498580 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.631359 4768 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 18:08:35 crc kubenswrapper[4768]: W1124 18:08:35.637687 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d3ec89_0004_4ed5_ae4b_c9dcf85a3151.slice/crio-435841740dd60c8fcbffa66a9434bfe3a462ced6f211e433ab0c03af97b1f407 WatchSource:0}: Error finding container 435841740dd60c8fcbffa66a9434bfe3a462ced6f211e433ab0c03af97b1f407: Status 404 returned error can't find the container with id 435841740dd60c8fcbffa66a9434bfe3a462ced6f211e433ab0c03af97b1f407 Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.912103 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296e4b18-c3e3-481d-bad3-0c2427ca013b" path="/var/lib/kubelet/pods/296e4b18-c3e3-481d-bad3-0c2427ca013b/volumes" Nov 24 18:08:35 crc kubenswrapper[4768]: I1124 18:08:35.912660 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19be9c1-5ec1-49b0-a86e-92833d2dbf93" path="/var/lib/kubelet/pods/e19be9c1-5ec1-49b0-a86e-92833d2dbf93/volumes" Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.496512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84b9666b-4cab-4517-ac64-33e41869c70e","Type":"ContainerStarted","Data":"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203"} Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.496830 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84b9666b-4cab-4517-ac64-33e41869c70e","Type":"ContainerStarted","Data":"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2"} Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.496844 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84b9666b-4cab-4517-ac64-33e41869c70e","Type":"ContainerStarted","Data":"a210b40328c49e51852550f760edf53404704d82faa724ae8b1040c1da01d656"} Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.499047 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151","Type":"ContainerStarted","Data":"d7737f788a180f24e07ff470c5f922c8523b430c1ad5d65b1fc6d6be4f2b0424"} Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.499082 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"20d3ec89-0004-4ed5-ae4b-c9dcf85a3151","Type":"ContainerStarted","Data":"435841740dd60c8fcbffa66a9434bfe3a462ced6f211e433ab0c03af97b1f407"} Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.499500 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.506043 4768 generic.go:334] "Generic (PLEG): container finished" podID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerID="5e1a3882422a499a91508cb1c317f32cd3bfb30977f5c09cf2c6aebd3595efb5" exitCode=0 Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.506088 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerDied","Data":"5e1a3882422a499a91508cb1c317f32cd3bfb30977f5c09cf2c6aebd3595efb5"} Nov 24 18:08:36 crc kubenswrapper[4768]: I1124 18:08:36.523611 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.523597274 
podStartE2EDuration="2.523597274s" podCreationTimestamp="2025-11-24 18:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:36.518572971 +0000 UTC m=+1155.379154748" watchObservedRunningTime="2025-11-24 18:08:36.523597274 +0000 UTC m=+1155.384179051" Nov 24 18:08:37 crc kubenswrapper[4768]: I1124 18:08:37.516741 4768 generic.go:334] "Generic (PLEG): container finished" podID="8def680f-a48e-4b0f-9941-0cbb8a626206" containerID="61ad80c7b16201ebc7978162933c85bad525704b18e4eceba5dfcbba71d8d6d3" exitCode=0 Nov 24 18:08:37 crc kubenswrapper[4768]: I1124 18:08:37.516839 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgtgz" event={"ID":"8def680f-a48e-4b0f-9941-0cbb8a626206","Type":"ContainerDied","Data":"61ad80c7b16201ebc7978162933c85bad525704b18e4eceba5dfcbba71d8d6d3"} Nov 24 18:08:37 crc kubenswrapper[4768]: I1124 18:08:37.546229 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.110903103 podStartE2EDuration="3.54619605s" podCreationTimestamp="2025-11-24 18:08:34 +0000 UTC" firstStartedPulling="2025-11-24 18:08:35.640157467 +0000 UTC m=+1154.500739244" lastFinishedPulling="2025-11-24 18:08:36.075450414 +0000 UTC m=+1154.936032191" observedRunningTime="2025-11-24 18:08:36.547713065 +0000 UTC m=+1155.408294842" watchObservedRunningTime="2025-11-24 18:08:37.54619605 +0000 UTC m=+1156.406777837" Nov 24 18:08:37 crc kubenswrapper[4768]: I1124 18:08:37.619543 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 18:08:37 crc kubenswrapper[4768]: I1124 18:08:37.619624 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 18:08:37 crc kubenswrapper[4768]: I1124 18:08:37.989569 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.012992 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.013040 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.039048 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.049310 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.060459 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-tjjzp"] Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.561501 4768 generic.go:334] "Generic (PLEG): container finished" podID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerID="3bd25861c7e572873c4e4e5eeac3d655d08078657de6165f869cab05b383dd6c" exitCode=0 Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.562054 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerName="dnsmasq-dns" containerID="cri-o://2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880" gracePeriod=10 Nov 24 18:08:38 crc kubenswrapper[4768]: 
I1124 18:08:38.561547 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerDied","Data":"3bd25861c7e572873c4e4e5eeac3d655d08078657de6165f869cab05b383dd6c"} Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.658068 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.711655 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.711707 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.859996 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959109 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-log-httpd\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959261 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-combined-ca-bundle\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959322 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-sg-core-conf-yaml\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959377 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrx6\" (UniqueName: \"kubernetes.io/projected/bb0571fc-4ac1-413b-9253-c3555bdde7b4-kube-api-access-6nrx6\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959410 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-run-httpd\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959586 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-config-data\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.959618 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-scripts\") pod \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\" (UID: \"bb0571fc-4ac1-413b-9253-c3555bdde7b4\") " Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.960745 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.960813 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.967908 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-scripts" (OuterVolumeSpecName: "scripts") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:38 crc kubenswrapper[4768]: I1124 18:08:38.995018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0571fc-4ac1-413b-9253-c3555bdde7b4-kube-api-access-6nrx6" (OuterVolumeSpecName: "kube-api-access-6nrx6") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "kube-api-access-6nrx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.022973 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.029764 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.061688 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.061711 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrx6\" (UniqueName: \"kubernetes.io/projected/bb0571fc-4ac1-413b-9253-c3555bdde7b4-kube-api-access-6nrx6\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.061720 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.061730 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.061738 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb0571fc-4ac1-413b-9253-c3555bdde7b4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.083632 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.097923 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-config-data" (OuterVolumeSpecName: "config-data") pod "bb0571fc-4ac1-413b-9253-c3555bdde7b4" (UID: "bb0571fc-4ac1-413b-9253-c3555bdde7b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.162380 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-combined-ca-bundle\") pod \"8def680f-a48e-4b0f-9941-0cbb8a626206\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.162596 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-config-data\") pod \"8def680f-a48e-4b0f-9941-0cbb8a626206\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.162613 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-scripts\") pod \"8def680f-a48e-4b0f-9941-0cbb8a626206\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.162665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw699\" (UniqueName: \"kubernetes.io/projected/8def680f-a48e-4b0f-9941-0cbb8a626206-kube-api-access-mw699\") pod \"8def680f-a48e-4b0f-9941-0cbb8a626206\" (UID: \"8def680f-a48e-4b0f-9941-0cbb8a626206\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.163052 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.163068 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0571fc-4ac1-413b-9253-c3555bdde7b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.165832 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-scripts" (OuterVolumeSpecName: "scripts") pod "8def680f-a48e-4b0f-9941-0cbb8a626206" (UID: "8def680f-a48e-4b0f-9941-0cbb8a626206"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.166288 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8def680f-a48e-4b0f-9941-0cbb8a626206-kube-api-access-mw699" (OuterVolumeSpecName: "kube-api-access-mw699") pod "8def680f-a48e-4b0f-9941-0cbb8a626206" (UID: "8def680f-a48e-4b0f-9941-0cbb8a626206"). InnerVolumeSpecName "kube-api-access-mw699". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.191653 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8def680f-a48e-4b0f-9941-0cbb8a626206" (UID: "8def680f-a48e-4b0f-9941-0cbb8a626206"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.198875 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-config-data" (OuterVolumeSpecName: "config-data") pod "8def680f-a48e-4b0f-9941-0cbb8a626206" (UID: "8def680f-a48e-4b0f-9941-0cbb8a626206"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.264436 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.264469 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.264479 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw699\" (UniqueName: \"kubernetes.io/projected/8def680f-a48e-4b0f-9941-0cbb8a626206-kube-api-access-mw699\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.264507 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8def680f-a48e-4b0f-9941-0cbb8a626206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.553106 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.575538 4768 generic.go:334] "Generic (PLEG): container finished" podID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerID="2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880" exitCode=0 Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.575590 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.575628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" event={"ID":"3650ec4f-2853-4822-a5ee-47b1b642fdbd","Type":"ContainerDied","Data":"2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880"} Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.575731 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-tjjzp" event={"ID":"3650ec4f-2853-4822-a5ee-47b1b642fdbd","Type":"ContainerDied","Data":"f5c5048976d0c8aa8b32d95ad38df457543a10993ba95eab2d305da58659be17"} Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.575793 4768 scope.go:117] "RemoveContainer" containerID="2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.578325 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgtgz" event={"ID":"8def680f-a48e-4b0f-9941-0cbb8a626206","Type":"ContainerDied","Data":"b84745f8f9dd9dc0b17ebeba424e6b20ad02da3adb1c88d22fadc1aa986e07c1"} Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.578368 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84745f8f9dd9dc0b17ebeba424e6b20ad02da3adb1c88d22fadc1aa986e07c1" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.578438 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgtgz" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.594002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb0571fc-4ac1-413b-9253-c3555bdde7b4","Type":"ContainerDied","Data":"1f5284a14c05abfa8f87dc1704f534d1c90c9831fecbcd048cce6aa9c41405a9"} Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.594315 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.601030 4768 generic.go:334] "Generic (PLEG): container finished" podID="304a3869-b79e-47bc-ad78-0a4a41868b4f" containerID="036a387e45fa1462799505d6a5283c044788f73c5440c68bce8b7cd57ff2299b" exitCode=0 Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.601310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" event={"ID":"304a3869-b79e-47bc-ad78-0a4a41868b4f","Type":"ContainerDied","Data":"036a387e45fa1462799505d6a5283c044788f73c5440c68bce8b7cd57ff2299b"} Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.625510 4768 scope.go:117] "RemoveContainer" containerID="808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.666734 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.680092 4768 scope.go:117] "RemoveContainer" containerID="2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.681870 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880\": container with ID starting with 2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880 not found: ID does not exist" containerID="2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.681917 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880"} err="failed to get container status \"2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880\": rpc error: code = NotFound desc = could not find container \"2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880\": container with ID starting with 2a005cb526f19a9ccbccdd61ed622b4906ab250e07e8c38c12a3c09905530880 not found: ID does not exist" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.681946 4768 scope.go:117] "RemoveContainer" containerID="808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.684548 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.686111 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab\": container with ID starting with 808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab not found: ID does not exist" containerID="808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.686166 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab"} err="failed to get container status \"808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab\": rpc error: code = NotFound desc = could not find container \"808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab\": container with ID starting with 808857ce12e0fd0c5bbe9655b2f80034383d24b1e84c03ab1a14c2941fa4caab not 
found: ID does not exist" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.686199 4768 scope.go:117] "RemoveContainer" containerID="92e09ff839a9ebf93d6a1a1bd626ab8da9d1cf362542ff2a5105bc486e6ebd3e" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.688827 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-nb\") pod \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.688899 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr2pc\" (UniqueName: \"kubernetes.io/projected/3650ec4f-2853-4822-a5ee-47b1b642fdbd-kube-api-access-rr2pc\") pod \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.689154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-config\") pod \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.689187 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-dns-svc\") pod \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.689266 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-sb\") pod \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\" (UID: \"3650ec4f-2853-4822-a5ee-47b1b642fdbd\") " Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.697605 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698007 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="proxy-httpd" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698027 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="proxy-httpd" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698039 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerName="dnsmasq-dns" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698046 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerName="dnsmasq-dns" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698060 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8def680f-a48e-4b0f-9941-0cbb8a626206" containerName="nova-manage" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698068 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8def680f-a48e-4b0f-9941-0cbb8a626206" containerName="nova-manage" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698085 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-notification-agent" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698091 
4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-notification-agent" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698104 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerName="init" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698110 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerName="init" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698121 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-central-agent" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698127 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-central-agent" Nov 24 18:08:39 crc kubenswrapper[4768]: E1124 18:08:39.698156 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="sg-core" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698162 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="sg-core" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698316 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-central-agent" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698335 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" containerName="dnsmasq-dns" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698345 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="proxy-httpd" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698357 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="ceilometer-notification-agent" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698366 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" containerName="sg-core" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.698373 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8def680f-a48e-4b0f-9941-0cbb8a626206" containerName="nova-manage" Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.699950 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.701885 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.702914 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.703149 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.708841 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3650ec4f-2853-4822-a5ee-47b1b642fdbd-kube-api-access-rr2pc" (OuterVolumeSpecName: "kube-api-access-rr2pc") pod "3650ec4f-2853-4822-a5ee-47b1b642fdbd" (UID: "3650ec4f-2853-4822-a5ee-47b1b642fdbd"). InnerVolumeSpecName "kube-api-access-rr2pc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.725658 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.764097 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3650ec4f-2853-4822-a5ee-47b1b642fdbd" (UID: "3650ec4f-2853-4822-a5ee-47b1b642fdbd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.772393 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-config" (OuterVolumeSpecName: "config") pod "3650ec4f-2853-4822-a5ee-47b1b642fdbd" (UID: "3650ec4f-2853-4822-a5ee-47b1b642fdbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.773101 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3650ec4f-2853-4822-a5ee-47b1b642fdbd" (UID: "3650ec4f-2853-4822-a5ee-47b1b642fdbd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.780957 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3650ec4f-2853-4822-a5ee-47b1b642fdbd" (UID: "3650ec4f-2853-4822-a5ee-47b1b642fdbd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.791593 4768 scope.go:117] "RemoveContainer" containerID="9726bf9e1c01ef800e9f14da964e0bdd67ab8f8c65ecdfdc4c4c98ea0c068635"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.796436 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.798332 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.798376 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr2pc\" (UniqueName: \"kubernetes.io/projected/3650ec4f-2853-4822-a5ee-47b1b642fdbd-kube-api-access-rr2pc\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.798393 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-config\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.798403 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3650ec4f-2853-4822-a5ee-47b1b642fdbd-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.813460 4768 scope.go:117] "RemoveContainer" containerID="3bd25861c7e572873c4e4e5eeac3d655d08078657de6165f869cab05b383dd6c"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.846263 4768 scope.go:117] "RemoveContainer" containerID="5e1a3882422a499a91508cb1c317f32cd3bfb30977f5c09cf2c6aebd3595efb5"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.856393 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.856757 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-log" containerID="cri-o://ac4c8066a159441730610f0a5d7d0a0360ebde00a621351fba2cc6908817098e" gracePeriod=30
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.857430 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-api" containerID="cri-o://a8cfd115dda30d626f1b52dff5ce5d6b5a22283b6f5ce046ceb5ad6442e50aef" gracePeriod=30
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.893326 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.894212 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.909427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-log-httpd\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.910878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-config-data\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.911322 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-run-httpd\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.911397 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.911732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.911922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.912395 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfpx\" (UniqueName: \"kubernetes.io/projected/eb08325b-f3c3-424f-a7c3-5796cbd7edab-kube-api-access-bjfpx\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.912600 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.948455 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0571fc-4ac1-413b-9253-c3555bdde7b4" path="/var/lib/kubelet/pods/bb0571fc-4ac1-413b-9253-c3555bdde7b4/volumes"
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.950077 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.950116 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.950131 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-tjjzp"]
Nov 24 18:08:39 crc kubenswrapper[4768]: I1124 18:08:39.952389 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-tjjzp"]
Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.019779 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0"
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.019819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.019866 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.019888 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjfpx\" (UniqueName: \"kubernetes.io/projected/eb08325b-f3c3-424f-a7c3-5796cbd7edab-kube-api-access-bjfpx\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.019919 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.020044 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-log-httpd\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.020070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-config-data\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.020099 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-run-httpd\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.020574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-run-httpd\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.021479 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-log-httpd\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.025017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.025430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.025835 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.027042 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.027931 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-config-data\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.039011 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjfpx\" (UniqueName: \"kubernetes.io/projected/eb08325b-f3c3-424f-a7c3-5796cbd7edab-kube-api-access-bjfpx\") pod \"ceilometer-0\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.109726 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.643453 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerID="ac4c8066a159441730610f0a5d7d0a0360ebde00a621351fba2cc6908817098e" exitCode=143 Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.643695 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8","Type":"ContainerDied","Data":"ac4c8066a159441730610f0a5d7d0a0360ebde00a621351fba2cc6908817098e"} Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.645275 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="435fe835-6f0c-426f-bc16-ec940fba83b0" containerName="nova-scheduler-scheduler" containerID="cri-o://37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" gracePeriod=30 Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.824637 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:08:40 crc kubenswrapper[4768]: I1124 18:08:40.971728 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.147146 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-config-data\") pod \"304a3869-b79e-47bc-ad78-0a4a41868b4f\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.147335 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47lg\" (UniqueName: \"kubernetes.io/projected/304a3869-b79e-47bc-ad78-0a4a41868b4f-kube-api-access-v47lg\") pod \"304a3869-b79e-47bc-ad78-0a4a41868b4f\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.147354 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-scripts\") pod \"304a3869-b79e-47bc-ad78-0a4a41868b4f\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.147382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-combined-ca-bundle\") pod \"304a3869-b79e-47bc-ad78-0a4a41868b4f\" (UID: \"304a3869-b79e-47bc-ad78-0a4a41868b4f\") " Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.154397 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-scripts" (OuterVolumeSpecName: "scripts") pod "304a3869-b79e-47bc-ad78-0a4a41868b4f" (UID: "304a3869-b79e-47bc-ad78-0a4a41868b4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.156668 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304a3869-b79e-47bc-ad78-0a4a41868b4f-kube-api-access-v47lg" (OuterVolumeSpecName: "kube-api-access-v47lg") pod "304a3869-b79e-47bc-ad78-0a4a41868b4f" (UID: "304a3869-b79e-47bc-ad78-0a4a41868b4f"). InnerVolumeSpecName "kube-api-access-v47lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.179315 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "304a3869-b79e-47bc-ad78-0a4a41868b4f" (UID: "304a3869-b79e-47bc-ad78-0a4a41868b4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.182029 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-config-data" (OuterVolumeSpecName: "config-data") pod "304a3869-b79e-47bc-ad78-0a4a41868b4f" (UID: "304a3869-b79e-47bc-ad78-0a4a41868b4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.249789 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47lg\" (UniqueName: \"kubernetes.io/projected/304a3869-b79e-47bc-ad78-0a4a41868b4f-kube-api-access-v47lg\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.249823 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.249834 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.249842 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/304a3869-b79e-47bc-ad78-0a4a41868b4f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.654948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerStarted","Data":"43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822"} Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.655636 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerStarted","Data":"e7c5ce32c45fccbe10ef60dde3432199370b00783d3e4b8543a8494351f1d897"} Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.660478 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-log" containerID="cri-o://36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2" gracePeriod=30 Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.660602 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.662962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7rkgz" event={"ID":"304a3869-b79e-47bc-ad78-0a4a41868b4f","Type":"ContainerDied","Data":"293c5c7bd8eb463faece385fe1fe7dac41eff79f590459c455e7252da4bbf714"}
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.663017 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293c5c7bd8eb463faece385fe1fe7dac41eff79f590459c455e7252da4bbf714"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.663114 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-metadata" containerID="cri-o://57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203" gracePeriod=30
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.714180 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 18:08:41 crc kubenswrapper[4768]: E1124 18:08:41.714829 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304a3869-b79e-47bc-ad78-0a4a41868b4f" containerName="nova-cell1-conductor-db-sync"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.714859 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="304a3869-b79e-47bc-ad78-0a4a41868b4f" containerName="nova-cell1-conductor-db-sync"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.715183 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="304a3869-b79e-47bc-ad78-0a4a41868b4f" containerName="nova-cell1-conductor-db-sync"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.716049 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.718092 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.728350 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.860041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9883b617-fef7-4b4e-9856-e7075ba94d9e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.860146 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9883b617-fef7-4b4e-9856-e7075ba94d9e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.860195 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twjr\" (UniqueName: \"kubernetes.io/projected/9883b617-fef7-4b4e-9856-e7075ba94d9e-kube-api-access-6twjr\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.909707 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3650ec4f-2853-4822-a5ee-47b1b642fdbd" path="/var/lib/kubelet/pods/3650ec4f-2853-4822-a5ee-47b1b642fdbd/volumes"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.962393 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9883b617-fef7-4b4e-9856-e7075ba94d9e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.962492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6twjr\" (UniqueName: \"kubernetes.io/projected/9883b617-fef7-4b4e-9856-e7075ba94d9e-kube-api-access-6twjr\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.962554 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9883b617-fef7-4b4e-9856-e7075ba94d9e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.968729 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9883b617-fef7-4b4e-9856-e7075ba94d9e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.975785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9883b617-fef7-4b4e-9856-e7075ba94d9e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:41 crc kubenswrapper[4768]: I1124 18:08:41.985359 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6twjr\" (UniqueName: \"kubernetes.io/projected/9883b617-fef7-4b4e-9856-e7075ba94d9e-kube-api-access-6twjr\") pod \"nova-cell1-conductor-0\" (UID: \"9883b617-fef7-4b4e-9856-e7075ba94d9e\") " pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.043602 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.247262 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.372964 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9666b-4cab-4517-ac64-33e41869c70e-logs\") pod \"84b9666b-4cab-4517-ac64-33e41869c70e\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") "
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.373127 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdkjz\" (UniqueName: \"kubernetes.io/projected/84b9666b-4cab-4517-ac64-33e41869c70e-kube-api-access-rdkjz\") pod \"84b9666b-4cab-4517-ac64-33e41869c70e\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") "
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.373155 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-nova-metadata-tls-certs\") pod \"84b9666b-4cab-4517-ac64-33e41869c70e\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") "
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.373326 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-config-data\") pod \"84b9666b-4cab-4517-ac64-33e41869c70e\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") "
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.373417 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-combined-ca-bundle\") pod \"84b9666b-4cab-4517-ac64-33e41869c70e\" (UID: \"84b9666b-4cab-4517-ac64-33e41869c70e\") "
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.373767 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b9666b-4cab-4517-ac64-33e41869c70e-logs" (OuterVolumeSpecName: "logs") pod "84b9666b-4cab-4517-ac64-33e41869c70e" (UID: "84b9666b-4cab-4517-ac64-33e41869c70e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.376392 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b9666b-4cab-4517-ac64-33e41869c70e-kube-api-access-rdkjz" (OuterVolumeSpecName: "kube-api-access-rdkjz") pod "84b9666b-4cab-4517-ac64-33e41869c70e" (UID: "84b9666b-4cab-4517-ac64-33e41869c70e"). InnerVolumeSpecName "kube-api-access-rdkjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.400432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-config-data" (OuterVolumeSpecName: "config-data") pod "84b9666b-4cab-4517-ac64-33e41869c70e" (UID: "84b9666b-4cab-4517-ac64-33e41869c70e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.406838 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84b9666b-4cab-4517-ac64-33e41869c70e" (UID: "84b9666b-4cab-4517-ac64-33e41869c70e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.439072 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "84b9666b-4cab-4517-ac64-33e41869c70e" (UID: "84b9666b-4cab-4517-ac64-33e41869c70e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.475508 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.475536 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9666b-4cab-4517-ac64-33e41869c70e-logs\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.475546 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdkjz\" (UniqueName: \"kubernetes.io/projected/84b9666b-4cab-4517-ac64-33e41869c70e-kube-api-access-rdkjz\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.475556 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.475566 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9666b-4cab-4517-ac64-33e41869c70e-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.545658 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 18:08:42 crc kubenswrapper[4768]: W1124 18:08:42.551966 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9883b617_fef7_4b4e_9856_e7075ba94d9e.slice/crio-c0c7d6181e591e4b228d9bfa71d1ff13d66597ec081a63c6f517408428f0ad68 WatchSource:0}: Error finding container c0c7d6181e591e4b228d9bfa71d1ff13d66597ec081a63c6f517408428f0ad68: Status 404 returned error can't find the container with id c0c7d6181e591e4b228d9bfa71d1ff13d66597ec081a63c6f517408428f0ad68
Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.672436 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerStarted","Data":"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48"}
pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerStarted","Data":"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48"} Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677796 4768 generic.go:334] "Generic (PLEG): container finished" podID="84b9666b-4cab-4517-ac64-33e41869c70e" containerID="57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203" exitCode=0 Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677835 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84b9666b-4cab-4517-ac64-33e41869c70e","Type":"ContainerDied","Data":"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203"} Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677867 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84b9666b-4cab-4517-ac64-33e41869c70e","Type":"ContainerDied","Data":"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2"} Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677844 4768 generic.go:334] "Generic (PLEG): container finished" podID="84b9666b-4cab-4517-ac64-33e41869c70e" containerID="36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2" exitCode=143 Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677887 4768 scope.go:117] "RemoveContainer" containerID="57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677820 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.677992 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84b9666b-4cab-4517-ac64-33e41869c70e","Type":"ContainerDied","Data":"a210b40328c49e51852550f760edf53404704d82faa724ae8b1040c1da01d656"} Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.683709 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9883b617-fef7-4b4e-9856-e7075ba94d9e","Type":"ContainerStarted","Data":"c0c7d6181e591e4b228d9bfa71d1ff13d66597ec081a63c6f517408428f0ad68"} Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.738222 4768 scope.go:117] "RemoveContainer" containerID="36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.738771 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.752102 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.765538 4768 scope.go:117] "RemoveContainer" containerID="57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203" Nov 24 18:08:42 crc kubenswrapper[4768]: E1124 18:08:42.766061 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203\": container with ID starting with 57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203 not found: ID does not exist" containerID="57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.766104 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203"} err="failed to get container status \"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203\": rpc error: code = NotFound desc = could not find container \"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203\": container with ID starting with 57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203 not found: ID does not exist" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.766134 4768 scope.go:117] "RemoveContainer" containerID="36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.766205 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:42 crc kubenswrapper[4768]: E1124 18:08:42.766845 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-log" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.766872 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-log" Nov 24 18:08:42 crc kubenswrapper[4768]: E1124 18:08:42.766934 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-metadata" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.766947 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-metadata" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.767181 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-metadata" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.767222 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" containerName="nova-metadata-log" Nov 24 18:08:42 crc kubenswrapper[4768]: E1124 18:08:42.768184 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2\": container with ID starting with 36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2 not found: ID does not exist" containerID="36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.768244 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2"} err="failed to get container status \"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2\": rpc error: code = NotFound desc = could not find container \"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2\": container with ID starting with 36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2 not found: ID does not exist" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.768294 4768 scope.go:117] "RemoveContainer" containerID="57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.768746 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.769547 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203"} err="failed to get container status \"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203\": rpc error: code = NotFound desc = could not find container \"57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203\": container with ID starting with 57bc1850ff30ef144147a245688255de053c04c3821b8ead3dbf31dfbcd4e203 not found: ID does not exist" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.769570 4768 scope.go:117] "RemoveContainer" containerID="36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.769778 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2"} err="failed to get container status \"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2\": rpc error: code = NotFound desc = could not find container \"36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2\": container with ID starting with 36a7542d38b3a3bc2b89c30137e462a789e5eac34722c096803f0cc03d25e3f2 not found: ID does not exist" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.773031 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.773329 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.776048 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.781250 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.781305 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92jv\" (UniqueName: \"kubernetes.io/projected/ceef9077-3c84-430c-97e3-965f6eb58b7c-kube-api-access-v92jv\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.781368 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-config-data\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.781389 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.781433 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef9077-3c84-430c-97e3-965f6eb58b7c-logs\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.882882 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-config-data\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.882932 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.883212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef9077-3c84-430c-97e3-965f6eb58b7c-logs\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.883320 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.883366 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92jv\" (UniqueName: \"kubernetes.io/projected/ceef9077-3c84-430c-97e3-965f6eb58b7c-kube-api-access-v92jv\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.884102 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef9077-3c84-430c-97e3-965f6eb58b7c-logs\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.886545 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-config-data\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.890241 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc kubenswrapper[4768]: I1124 18:08:42.890617 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:42 crc 
kubenswrapper[4768]: I1124 18:08:42.900459 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92jv\" (UniqueName: \"kubernetes.io/projected/ceef9077-3c84-430c-97e3-965f6eb58b7c-kube-api-access-v92jv\") pod \"nova-metadata-0\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " pod="openstack/nova-metadata-0" Nov 24 18:08:43 crc kubenswrapper[4768]: E1124 18:08:43.014901 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 18:08:43 crc kubenswrapper[4768]: E1124 18:08:43.016672 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 18:08:43 crc kubenswrapper[4768]: E1124 18:08:43.018314 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 18:08:43 crc kubenswrapper[4768]: E1124 18:08:43.018367 4768 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="435fe835-6f0c-426f-bc16-ec940fba83b0" containerName="nova-scheduler-scheduler" Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.093321 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.587045 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 18:08:43 crc kubenswrapper[4768]: W1124 18:08:43.596652 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceef9077_3c84_430c_97e3_965f6eb58b7c.slice/crio-ea1935e2513e6c07f65ab87c57602e0fabe086c5669349033563355b51292f4f WatchSource:0}: Error finding container ea1935e2513e6c07f65ab87c57602e0fabe086c5669349033563355b51292f4f: Status 404 returned error can't find the container with id ea1935e2513e6c07f65ab87c57602e0fabe086c5669349033563355b51292f4f
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.656675 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.656744 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.656801 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj"
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.657943 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99ccf8cd01116f9aed046232143cdd0d069d3d1d4cac3ec060c0e2b82cb26f4b"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.658057 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://99ccf8cd01116f9aed046232143cdd0d069d3d1d4cac3ec060c0e2b82cb26f4b" gracePeriod=600
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.696787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9883b617-fef7-4b4e-9856-e7075ba94d9e","Type":"ContainerStarted","Data":"268960c113bca2508ad7ce44ed4b744de777f3020fba056c60fe30ae1fbeb4aa"}
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.697432 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.700170 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceef9077-3c84-430c-97e3-965f6eb58b7c","Type":"ContainerStarted","Data":"ea1935e2513e6c07f65ab87c57602e0fabe086c5669349033563355b51292f4f"}
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.719012 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.718990432 podStartE2EDuration="2.718990432s" podCreationTimestamp="2025-11-24 18:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:43.713152737 +0000 UTC m=+1162.573734514" watchObservedRunningTime="2025-11-24 18:08:43.718990432 +0000 UTC m=+1162.579572209"
Nov 24 18:08:43 crc kubenswrapper[4768]: I1124 18:08:43.911826 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b9666b-4cab-4517-ac64-33e41869c70e" path="/var/lib/kubelet/pods/84b9666b-4cab-4517-ac64-33e41869c70e/volumes"
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.716700 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="99ccf8cd01116f9aed046232143cdd0d069d3d1d4cac3ec060c0e2b82cb26f4b" exitCode=0
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.716779 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"99ccf8cd01116f9aed046232143cdd0d069d3d1d4cac3ec060c0e2b82cb26f4b"}
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.717168 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"40df835a5ec9cfe7b392f2013854288a324716103ffb3a94522610c0a0ffe19d"}
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.717196 4768 scope.go:117] "RemoveContainer" containerID="5b1fcca249f25d296bfba4402fd65255a8a672ed04eb8c495487a6905cab2500"
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.723401 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerID="a8cfd115dda30d626f1b52dff5ce5d6b5a22283b6f5ce046ceb5ad6442e50aef" exitCode=0
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.723552 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8","Type":"ContainerDied","Data":"a8cfd115dda30d626f1b52dff5ce5d6b5a22283b6f5ce046ceb5ad6442e50aef"}
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.728537 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerStarted","Data":"918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24"}
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.732660 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceef9077-3c84-430c-97e3-965f6eb58b7c","Type":"ContainerStarted","Data":"40fd04b493c459aa04293c07a86b4daae3bd6802128b90cff665783fd72a3587"}
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.732716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceef9077-3c84-430c-97e3-965f6eb58b7c","Type":"ContainerStarted","Data":"2183e52e0e31f9affdb546caa2c49cd9253df65a3849760bf17010c659d6d6b3"}
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.783216 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.783188613 podStartE2EDuration="2.783188613s" podCreationTimestamp="2025-11-24 18:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:44.757688896 +0000 UTC m=+1163.618270673" watchObservedRunningTime="2025-11-24 18:08:44.783188613 +0000 UTC m=+1163.643770390"
Nov 24 18:08:44 crc kubenswrapper[4768]: I1124 18:08:44.940237 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.236898 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.337215 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pcnr\" (UniqueName: \"kubernetes.io/projected/435fe835-6f0c-426f-bc16-ec940fba83b0-kube-api-access-8pcnr\") pod \"435fe835-6f0c-426f-bc16-ec940fba83b0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") "
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.337671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-config-data\") pod \"435fe835-6f0c-426f-bc16-ec940fba83b0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") "
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.337783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-combined-ca-bundle\") pod \"435fe835-6f0c-426f-bc16-ec940fba83b0\" (UID: \"435fe835-6f0c-426f-bc16-ec940fba83b0\") "
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.344107 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435fe835-6f0c-426f-bc16-ec940fba83b0-kube-api-access-8pcnr" (OuterVolumeSpecName: "kube-api-access-8pcnr") pod "435fe835-6f0c-426f-bc16-ec940fba83b0" (UID: "435fe835-6f0c-426f-bc16-ec940fba83b0"). InnerVolumeSpecName "kube-api-access-8pcnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.378301 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "435fe835-6f0c-426f-bc16-ec940fba83b0" (UID: "435fe835-6f0c-426f-bc16-ec940fba83b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.383667 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-config-data" (OuterVolumeSpecName: "config-data") pod "435fe835-6f0c-426f-bc16-ec940fba83b0" (UID: "435fe835-6f0c-426f-bc16-ec940fba83b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.439650 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pcnr\" (UniqueName: \"kubernetes.io/projected/435fe835-6f0c-426f-bc16-ec940fba83b0-kube-api-access-8pcnr\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.439698 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.439709 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435fe835-6f0c-426f-bc16-ec940fba83b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.454444 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.541650 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-config-data\") pod \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.542806 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-logs\") pod \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.542908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-combined-ca-bundle\") pod \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.542954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9ls5\" (UniqueName: \"kubernetes.io/projected/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-kube-api-access-n9ls5\") pod \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\" (UID: \"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8\") " Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.543309 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-logs" (OuterVolumeSpecName: "logs") pod "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" (UID: "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.543807 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.546263 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-kube-api-access-n9ls5" (OuterVolumeSpecName: "kube-api-access-n9ls5") pod "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" (UID: "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8"). InnerVolumeSpecName "kube-api-access-n9ls5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.567425 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" (UID: "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.578094 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-config-data" (OuterVolumeSpecName: "config-data") pod "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" (UID: "6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.645763 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.645987 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.646049 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9ls5\" (UniqueName: \"kubernetes.io/projected/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8-kube-api-access-n9ls5\") on node \"crc\" DevicePath \"\"" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.743552 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8","Type":"ContainerDied","Data":"1f76e12b847bb9af6cf41c00eae9c5275e33555c64e36fc3b5f2ae3798859981"} Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.743609 4768 scope.go:117] "RemoveContainer" containerID="a8cfd115dda30d626f1b52dff5ce5d6b5a22283b6f5ce046ceb5ad6442e50aef" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.743698 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.746269 4768 generic.go:334] "Generic (PLEG): container finished" podID="435fe835-6f0c-426f-bc16-ec940fba83b0" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" exitCode=0 Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.746562 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"435fe835-6f0c-426f-bc16-ec940fba83b0","Type":"ContainerDied","Data":"37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4"} Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.746644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"435fe835-6f0c-426f-bc16-ec940fba83b0","Type":"ContainerDied","Data":"b1170e156f908450e96d3fda73fca86d5749c8a3679a49dbfb0ec06e1b11dba2"} Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.746790 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.783964 4768 scope.go:117] "RemoveContainer" containerID="ac4c8066a159441730610f0a5d7d0a0360ebde00a621351fba2cc6908817098e" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.831088 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.840036 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.850609 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: E1124 18:08:45.850993 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-api" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.851015 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-api" Nov 24 18:08:45 crc kubenswrapper[4768]: E1124 18:08:45.851025 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435fe835-6f0c-426f-bc16-ec940fba83b0" containerName="nova-scheduler-scheduler" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.851033 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="435fe835-6f0c-426f-bc16-ec940fba83b0" containerName="nova-scheduler-scheduler" Nov 24 18:08:45 crc kubenswrapper[4768]: E1124 18:08:45.851071 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-log" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.851078 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-log" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.851241 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="435fe835-6f0c-426f-bc16-ec940fba83b0" containerName="nova-scheduler-scheduler" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.851255 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-api" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.851270 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" containerName="nova-api-log" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.852182 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.855650 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.861263 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.881607 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.885679 4768 scope.go:117] "RemoveContainer" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.896590 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.916708 4768 scope.go:117] "RemoveContainer" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" Nov 24 18:08:45 crc kubenswrapper[4768]: E1124 18:08:45.919068 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4\": container with ID starting with 37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4 not found: ID does not exist" containerID="37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.919110 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4"} err="failed to get container status \"37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4\": rpc error: code = NotFound desc = could not find container \"37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4\": container with ID starting with 37c58b0fe0c0af28420e2f5f80db6bf1181437b1c937a0f95806eb14f59658d4 not found: ID does not exist" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.921605 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435fe835-6f0c-426f-bc16-ec940fba83b0" path="/var/lib/kubelet/pods/435fe835-6f0c-426f-bc16-ec940fba83b0/volumes" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.922276 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8" path="/var/lib/kubelet/pods/6ffe3834-bdd8-4d50-b5bf-18ebfbe3f2f8/volumes" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.923051 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.924419 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.931313 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.940405 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdssz\" (UniqueName: \"kubernetes.io/projected/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-kube-api-access-fdssz\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955118 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/1eb38082-24b5-4378-8e39-c19b29273ab9-kube-api-access-67kbp\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955202 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-config-data\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955278 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-logs\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955349 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-config-data\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:45 crc kubenswrapper[4768]: I1124 18:08:45.955378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.056799 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-config-data\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.056895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-logs\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.056953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-config-data\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.056976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.057077 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdssz\" (UniqueName: \"kubernetes.io/projected/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-kube-api-access-fdssz\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.057112 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.057139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/1eb38082-24b5-4378-8e39-c19b29273ab9-kube-api-access-67kbp\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.058071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-logs\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.061930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-config-data\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.062082 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.062449 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-config-data\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.062767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.077216 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/1eb38082-24b5-4378-8e39-c19b29273ab9-kube-api-access-67kbp\") pod \"nova-scheduler-0\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.077853 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdssz\" (UniqueName: \"kubernetes.io/projected/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-kube-api-access-fdssz\") pod \"nova-api-0\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.176669 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.256253 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.762513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerStarted","Data":"c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b"} Nov 24 18:08:46 crc kubenswrapper[4768]: W1124 18:08:46.801893 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19bcd3a9_f7ee_43ab_aa3d_c956b2b098a8.slice/crio-c1deb0bb4f8fcf5023149b8ac64ea2816a45cd6d606b887dd16a4c10e3e47ab4 WatchSource:0}: Error finding container c1deb0bb4f8fcf5023149b8ac64ea2816a45cd6d606b887dd16a4c10e3e47ab4: Status 404 returned error can't find the container with id c1deb0bb4f8fcf5023149b8ac64ea2816a45cd6d606b887dd16a4c10e3e47ab4 Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.802422 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:08:46 crc kubenswrapper[4768]: I1124 18:08:46.878651 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.113275 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.816662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8","Type":"ContainerStarted","Data":"cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d"} Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.817206 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8","Type":"ContainerStarted","Data":"c1deb0bb4f8fcf5023149b8ac64ea2816a45cd6d606b887dd16a4c10e3e47ab4"} Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.834109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1eb38082-24b5-4378-8e39-c19b29273ab9","Type":"ContainerStarted","Data":"43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68"} Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.834161 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1eb38082-24b5-4378-8e39-c19b29273ab9","Type":"ContainerStarted","Data":"b315f2c001cf1461ac208f73c293f5ed6c192423e242e6e2d8784e58b7632f85"} Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.834195 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.874629 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.417981316 podStartE2EDuration="8.874610226s" podCreationTimestamp="2025-11-24 18:08:39 +0000 UTC" firstStartedPulling="2025-11-24 18:08:40.838272617 +0000 UTC m=+1159.698854394" lastFinishedPulling="2025-11-24 18:08:46.294901527 +0000 UTC m=+1165.155483304" observedRunningTime="2025-11-24 18:08:47.869327699 +0000 UTC m=+1166.729909476" watchObservedRunningTime="2025-11-24 18:08:47.874610226 +0000 UTC m=+1166.735192003" Nov 24 18:08:47 crc kubenswrapper[4768]: I1124 18:08:47.897988 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8979706849999998 podStartE2EDuration="2.897970685s" podCreationTimestamp="2025-11-24 18:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:47.892192675 +0000 UTC m=+1166.752774452" watchObservedRunningTime="2025-11-24 18:08:47.897970685 +0000 UTC m=+1166.758552452" Nov 24 18:08:48 crc kubenswrapper[4768]: I1124 18:08:48.093466 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 18:08:48 crc kubenswrapper[4768]: I1124 18:08:48.094220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 18:08:48 crc kubenswrapper[4768]: I1124 18:08:48.844283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8","Type":"ContainerStarted","Data":"ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22"} Nov 24 18:08:48 crc kubenswrapper[4768]: I1124 18:08:48.865682 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.8656603389999997 podStartE2EDuration="3.865660339s" podCreationTimestamp="2025-11-24 18:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:08:48.857621366 +0000 UTC m=+1167.718203143" watchObservedRunningTime="2025-11-24 18:08:48.865660339 +0000 UTC m=+1167.726242116" Nov 24 18:08:51 crc kubenswrapper[4768]: I1124 18:08:51.257108 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 18:08:53 crc kubenswrapper[4768]: I1124 18:08:53.093798 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 18:08:53 crc kubenswrapper[4768]: I1124 18:08:53.094572 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 18:08:54 crc kubenswrapper[4768]: I1124 18:08:54.104926 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 18:08:54 crc kubenswrapper[4768]: I1124 18:08:54.105397 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 18:08:56 crc kubenswrapper[4768]: I1124 18:08:56.177736 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 18:08:56 crc kubenswrapper[4768]: I1124 18:08:56.178139 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 18:08:56 crc kubenswrapper[4768]: I1124 18:08:56.256957 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 18:08:56 crc kubenswrapper[4768]: I1124 18:08:56.282259 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 18:08:56 crc kubenswrapper[4768]: I1124 18:08:56.957540 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 18:08:57 crc kubenswrapper[4768]: I1124 18:08:57.259738 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 18:08:57 crc kubenswrapper[4768]: I1124 18:08:57.259738 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.102618 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.103239 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.112640 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.118452 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.844237 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.920835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-combined-ca-bundle\") pod \"33b18e38-4235-4db0-a265-a985463b5d5e\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.921524 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-config-data\") pod \"33b18e38-4235-4db0-a265-a985463b5d5e\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.921697 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdvs5\" (UniqueName: \"kubernetes.io/projected/33b18e38-4235-4db0-a265-a985463b5d5e-kube-api-access-rdvs5\") pod \"33b18e38-4235-4db0-a265-a985463b5d5e\" (UID: \"33b18e38-4235-4db0-a265-a985463b5d5e\") " Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.927748 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b18e38-4235-4db0-a265-a985463b5d5e-kube-api-access-rdvs5" (OuterVolumeSpecName: "kube-api-access-rdvs5") pod "33b18e38-4235-4db0-a265-a985463b5d5e" (UID: "33b18e38-4235-4db0-a265-a985463b5d5e"). InnerVolumeSpecName "kube-api-access-rdvs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.947829 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-config-data" (OuterVolumeSpecName: "config-data") pod "33b18e38-4235-4db0-a265-a985463b5d5e" (UID: "33b18e38-4235-4db0-a265-a985463b5d5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.952857 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33b18e38-4235-4db0-a265-a985463b5d5e" (UID: "33b18e38-4235-4db0-a265-a985463b5d5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.987894 4768 generic.go:334] "Generic (PLEG): container finished" podID="33b18e38-4235-4db0-a265-a985463b5d5e" containerID="a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42" exitCode=137 Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.988002 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.988113 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33b18e38-4235-4db0-a265-a985463b5d5e","Type":"ContainerDied","Data":"a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42"} Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.988163 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33b18e38-4235-4db0-a265-a985463b5d5e","Type":"ContainerDied","Data":"31f37f703182c3583fc19f79d362124eecc4f53f6fc2e9ab5fed03fd097e1034"} Nov 24 18:09:03 crc kubenswrapper[4768]: I1124 18:09:03.988183 4768 scope.go:117] "RemoveContainer" containerID="a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.024759 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.024807 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b18e38-4235-4db0-a265-a985463b5d5e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.024821 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdvs5\" (UniqueName: \"kubernetes.io/projected/33b18e38-4235-4db0-a265-a985463b5d5e-kube-api-access-rdvs5\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.044817 4768 scope.go:117] "RemoveContainer" containerID="a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42" Nov 24 18:09:04 crc kubenswrapper[4768]: E1124 18:09:04.045667 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42\": container with ID starting with a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42 not found: ID does not exist" containerID="a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.045728 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42"} err="failed to get container status \"a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42\": rpc error: code = NotFound desc = could not find container \"a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42\": container with ID starting with a55cef3b1559e80884c0081677650bf6cd2fce58c214867ceb3a63d6fb5a1c42 not found: ID does not exist" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.050153 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.063912 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.076602 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:09:04 crc kubenswrapper[4768]: E1124 18:09:04.077019 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33b18e38-4235-4db0-a265-a985463b5d5e" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.077041 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b18e38-4235-4db0-a265-a985463b5d5e" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.077253 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b18e38-4235-4db0-a265-a985463b5d5e" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.077932 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.079809 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.080729 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.080798 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.087525 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.126395 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnvhf\" (UniqueName: \"kubernetes.io/projected/2f5e8953-6f74-4185-8020-585c1fc3d9f1-kube-api-access-tnvhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.126457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.126477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.126845 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.126870 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.229173 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tnvhf\" (UniqueName: \"kubernetes.io/projected/2f5e8953-6f74-4185-8020-585c1fc3d9f1-kube-api-access-tnvhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.229242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.229272 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.229437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.229462 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.233261 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.233303 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.233364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.234142 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5e8953-6f74-4185-8020-585c1fc3d9f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.246117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnvhf\" (UniqueName: 
\"kubernetes.io/projected/2f5e8953-6f74-4185-8020-585c1fc3d9f1-kube-api-access-tnvhf\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f5e8953-6f74-4185-8020-585c1fc3d9f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.399198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:04 crc kubenswrapper[4768]: I1124 18:09:04.849835 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 18:09:04 crc kubenswrapper[4768]: W1124 18:09:04.861700 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f5e8953_6f74_4185_8020_585c1fc3d9f1.slice/crio-277fc18b333aab6fa049d30db4f02370f970800951debba9b39e2a6a3687187d WatchSource:0}: Error finding container 277fc18b333aab6fa049d30db4f02370f970800951debba9b39e2a6a3687187d: Status 404 returned error can't find the container with id 277fc18b333aab6fa049d30db4f02370f970800951debba9b39e2a6a3687187d Nov 24 18:09:05 crc kubenswrapper[4768]: I1124 18:09:05.002938 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2f5e8953-6f74-4185-8020-585c1fc3d9f1","Type":"ContainerStarted","Data":"277fc18b333aab6fa049d30db4f02370f970800951debba9b39e2a6a3687187d"} Nov 24 18:09:05 crc kubenswrapper[4768]: I1124 18:09:05.910703 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b18e38-4235-4db0-a265-a985463b5d5e" path="/var/lib/kubelet/pods/33b18e38-4235-4db0-a265-a985463b5d5e/volumes" Nov 24 18:09:06 crc kubenswrapper[4768]: I1124 18:09:06.021715 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2f5e8953-6f74-4185-8020-585c1fc3d9f1","Type":"ContainerStarted","Data":"0a087161f7efc278a85ebca3164df06c6b0a8a665b241fa6cf693300d28fc5d2"} Nov 24 18:09:06 crc kubenswrapper[4768]: I1124 18:09:06.182185 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 18:09:06 crc kubenswrapper[4768]: I1124 18:09:06.182780 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 18:09:06 crc kubenswrapper[4768]: I1124 18:09:06.183759 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 18:09:06 crc kubenswrapper[4768]: I1124 18:09:06.187050 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 18:09:06 crc kubenswrapper[4768]: I1124 18:09:06.204885 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.204864372 podStartE2EDuration="2.204864372s" podCreationTimestamp="2025-11-24 18:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:06.045217532 +0000 UTC m=+1184.905799309" watchObservedRunningTime="2025-11-24 18:09:06.204864372 +0000 UTC m=+1185.065446149" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.029233 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.032578 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 
18:09:07.216588 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-522bl"] Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.218596 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.286945 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.287010 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-dns-svc\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.287048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.287120 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlp2h\" (UniqueName: \"kubernetes.io/projected/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-kube-api-access-wlp2h\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.287148 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-config\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.307366 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-522bl"] Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.391617 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.391689 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-dns-svc\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.391727 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: 
\"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.391806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlp2h\" (UniqueName: \"kubernetes.io/projected/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-kube-api-access-wlp2h\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.391834 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-config\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.392763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.392764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-dns-svc\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.392806 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-config\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.393447 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.431551 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlp2h\" (UniqueName: \"kubernetes.io/projected/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-kube-api-access-wlp2h\") pod \"dnsmasq-dns-5b856c5697-522bl\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.537965 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:07 crc kubenswrapper[4768]: I1124 18:09:07.993171 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-522bl"] Nov 24 18:09:07 crc kubenswrapper[4768]: W1124 18:09:07.995387 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c4e1aa_5468_45fe_8b4b_71af1ea6d19b.slice/crio-4f07f27564d2aa48948aae44a0a76e1c2de0b89e14737d7569c62451a7c3135b WatchSource:0}: Error finding container 4f07f27564d2aa48948aae44a0a76e1c2de0b89e14737d7569c62451a7c3135b: Status 404 returned error can't find the container with id 4f07f27564d2aa48948aae44a0a76e1c2de0b89e14737d7569c62451a7c3135b Nov 24 18:09:08 crc kubenswrapper[4768]: I1124 18:09:08.037423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-522bl" event={"ID":"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b","Type":"ContainerStarted","Data":"4f07f27564d2aa48948aae44a0a76e1c2de0b89e14737d7569c62451a7c3135b"} Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.046205 4768 generic.go:334] "Generic (PLEG): container finished" podID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerID="bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f" exitCode=0 Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.046379 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-522bl" event={"ID":"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b","Type":"ContainerDied","Data":"bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f"} Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.399889 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.409647 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.410134 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="sg-core" containerID="cri-o://918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24" gracePeriod=30 Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.410301 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="proxy-httpd" containerID="cri-o://c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b" gracePeriod=30 Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.410383 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-notification-agent" containerID="cri-o://4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48" gracePeriod=30 Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.410437 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-central-agent" containerID="cri-o://43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822" gracePeriod=30 Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.418984 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" 
podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.178:3000/\": EOF" Nov 24 18:09:09 crc kubenswrapper[4768]: I1124 18:09:09.626862 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.058857 4768 generic.go:334] "Generic (PLEG): container finished" podID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerID="c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b" exitCode=0 Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.058906 4768 generic.go:334] "Generic (PLEG): container finished" podID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerID="918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24" exitCode=2 Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.058923 4768 generic.go:334] "Generic (PLEG): container finished" podID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerID="43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822" exitCode=0 Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.058948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerDied","Data":"c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b"} Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.059010 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerDied","Data":"918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24"} Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.059032 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerDied","Data":"43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822"} Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.061069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-522bl" event={"ID":"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b","Type":"ContainerStarted","Data":"b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b"} Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.061194 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-log" containerID="cri-o://cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d" gracePeriod=30 Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.061273 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-api" containerID="cri-o://ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22" gracePeriod=30 Nov 24 18:09:10 crc kubenswrapper[4768]: I1124 18:09:10.091910 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-522bl" podStartSLOduration=3.091888921 podStartE2EDuration="3.091888921s" podCreationTimestamp="2025-11-24 18:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:10.084885137 +0000 UTC m=+1188.945466924" watchObservedRunningTime="2025-11-24 18:09:10.091888921 +0000 UTC m=+1188.952470718" Nov 24 18:09:10 crc 
kubenswrapper[4768]: I1124 18:09:10.111686 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.178:3000/\": dial tcp 10.217.0.178:3000: connect: connection refused" Nov 24 18:09:11 crc kubenswrapper[4768]: I1124 18:09:11.072277 4768 generic.go:334] "Generic (PLEG): container finished" podID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerID="cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d" exitCode=143 Nov 24 18:09:11 crc kubenswrapper[4768]: I1124 18:09:11.072391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8","Type":"ContainerDied","Data":"cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d"} Nov 24 18:09:11 crc kubenswrapper[4768]: I1124 18:09:11.072795 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.709200 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.813810 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdssz\" (UniqueName: \"kubernetes.io/projected/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-kube-api-access-fdssz\") pod \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.813979 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-combined-ca-bundle\") pod \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.814227 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-logs\") pod \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.814281 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-config-data\") pod \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\" (UID: \"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8\") " Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.815074 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-logs" (OuterVolumeSpecName: "logs") pod "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" (UID: "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.824413 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-kube-api-access-fdssz" (OuterVolumeSpecName: "kube-api-access-fdssz") pod "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" (UID: "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8"). InnerVolumeSpecName "kube-api-access-fdssz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.848405 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-config-data" (OuterVolumeSpecName: "config-data") pod "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" (UID: "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.851367 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" (UID: "19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.918160 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.918199 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdssz\" (UniqueName: \"kubernetes.io/projected/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-kube-api-access-fdssz\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.918221 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:13 crc kubenswrapper[4768]: I1124 18:09:13.918237 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.100991 4768 generic.go:334] "Generic (PLEG): container finished" podID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerID="ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22" exitCode=0 Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.101039 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8","Type":"ContainerDied","Data":"ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22"} Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.101078 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8","Type":"ContainerDied","Data":"c1deb0bb4f8fcf5023149b8ac64ea2816a45cd6d606b887dd16a4c10e3e47ab4"} Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.101096 4768 scope.go:117] "RemoveContainer" containerID="ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.101105 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.125216 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.133163 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.136029 4768 scope.go:117] "RemoveContainer" containerID="cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.152773 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:14 crc kubenswrapper[4768]: E1124 18:09:14.153670 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-api" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.153694 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-api" Nov 24 18:09:14 crc kubenswrapper[4768]: E1124 18:09:14.153725 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-log" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.153733 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-log" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.153949 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-log" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.153965 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" containerName="nova-api-api" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.155088 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.156690 4768 scope.go:117] "RemoveContainer" containerID="ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22" Nov 24 18:09:14 crc kubenswrapper[4768]: E1124 18:09:14.157244 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22\": container with ID starting with ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22 not found: ID does not exist" containerID="ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.157275 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22"} err="failed to get container status \"ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22\": rpc error: code = NotFound desc = could not find container \"ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22\": container with ID starting with ae91f7da71bb90c98d4c0b8a03bf3718bee3250c7bdb8323c9346b3479b2ed22 not found: ID does not exist" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.157300 4768 scope.go:117] "RemoveContainer" containerID="cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d" Nov 24 18:09:14 crc kubenswrapper[4768]: E1124 18:09:14.157978 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d\": container with ID starting with cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d not found: ID does not exist" containerID="cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.158089 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d"} err="failed to get container status \"cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d\": rpc error: code = NotFound desc = could not find container \"cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d\": container with ID starting with cd77cc43c4373400367a14c5afaa8f111f1c1f7436cbde21a696e273898a597d not found: ID does not exist" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.159964 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.160023 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.165161 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.168018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.224723 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 
18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.224779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-config-data\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.224838 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.224866 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0b2190-f952-46d1-ace1-077ae4b4d860-logs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.224884 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.224953 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf95h\" (UniqueName: \"kubernetes.io/projected/9c0b2190-f952-46d1-ace1-077ae4b4d860-kube-api-access-sf95h\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.326921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf95h\" (UniqueName: \"kubernetes.io/projected/9c0b2190-f952-46d1-ace1-077ae4b4d860-kube-api-access-sf95h\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.327198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.327294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-config-data\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.327542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.327639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0b2190-f952-46d1-ace1-077ae4b4d860-logs\") pod \"nova-api-0\" (UID: 
\"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.327705 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.328415 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0b2190-f952-46d1-ace1-077ae4b4d860-logs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.332931 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.333789 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.335135 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.335389 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-config-data\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.348220 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf95h\" (UniqueName: \"kubernetes.io/projected/9c0b2190-f952-46d1-ace1-077ae4b4d860-kube-api-access-sf95h\") pod \"nova-api-0\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.399357 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.417935 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.493705 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.757231 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.836839 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-config-data\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.836883 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjfpx\" (UniqueName: \"kubernetes.io/projected/eb08325b-f3c3-424f-a7c3-5796cbd7edab-kube-api-access-bjfpx\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.836954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-log-httpd\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.837033 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-run-httpd\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.837067 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-ceilometer-tls-certs\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.837097 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-sg-core-conf-yaml\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.837159 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.837200 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-combined-ca-bundle\") pod \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\" (UID: \"eb08325b-f3c3-424f-a7c3-5796cbd7edab\") " Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.837982 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.838025 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.842200 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb08325b-f3c3-424f-a7c3-5796cbd7edab-kube-api-access-bjfpx" (OuterVolumeSpecName: "kube-api-access-bjfpx") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "kube-api-access-bjfpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.842757 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts" (OuterVolumeSpecName: "scripts") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.863099 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.904946 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.936977 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939169 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939194 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939205 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjfpx\" (UniqueName: \"kubernetes.io/projected/eb08325b-f3c3-424f-a7c3-5796cbd7edab-kube-api-access-bjfpx\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939214 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939222 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb08325b-f3c3-424f-a7c3-5796cbd7edab-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939230 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.939238 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.947088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-config-data" (OuterVolumeSpecName: "config-data") pod "eb08325b-f3c3-424f-a7c3-5796cbd7edab" (UID: "eb08325b-f3c3-424f-a7c3-5796cbd7edab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:14 crc kubenswrapper[4768]: I1124 18:09:14.966654 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.041203 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb08325b-f3c3-424f-a7c3-5796cbd7edab-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.116182 4768 generic.go:334] "Generic (PLEG): container finished" podID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerID="4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48" exitCode=0 Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.116279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerDied","Data":"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48"} Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.116316 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb08325b-f3c3-424f-a7c3-5796cbd7edab","Type":"ContainerDied","Data":"e7c5ce32c45fccbe10ef60dde3432199370b00783d3e4b8543a8494351f1d897"} Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.116341 4768 scope.go:117] "RemoveContainer" containerID="c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.116348 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.122776 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c0b2190-f952-46d1-ace1-077ae4b4d860","Type":"ContainerStarted","Data":"a4d843be3224bdc90e5c90cd87718b15c5e11d595218de217e950eae73af8d5f"} Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.142551 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.153388 4768 scope.go:117] "RemoveContainer" containerID="918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.162317 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.178224 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.191602 4768 scope.go:117] "RemoveContainer" containerID="4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210087 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.210547 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-notification-agent" Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210565 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-notification-agent" Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.210585 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="sg-core" 
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210593 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="sg-core"
Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.210624 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="proxy-httpd"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210633 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="proxy-httpd"
Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.210645 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-central-agent"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210653 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-central-agent"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210876 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="sg-core"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210897 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-notification-agent"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210915 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="ceilometer-central-agent"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.210930 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" containerName="proxy-httpd"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.212962 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.216780 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.216978 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.217011 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.218925 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.237791 4768 scope.go:117] "RemoveContainer" containerID="43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.243934 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244104 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smfbb\" (UniqueName: \"kubernetes.io/projected/a1fee949-0151-40ec-9c6e-1554e2279306-kube-api-access-smfbb\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-scripts\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244265 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-run-httpd\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244302 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-config-data\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244340 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.244376 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-log-httpd\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.285773 4768 scope.go:117] "RemoveContainer" containerID="c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b"
Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.286215 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b\": container with ID starting with c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b not found: ID does not exist" containerID="c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.286262 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b"} err="failed to get container status \"c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b\": rpc error: code = NotFound desc = could not find container \"c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b\": container with ID starting with c3a519f75eb482ea596225f98615714c96172443437045f21a696f14aca5375b not found: ID does not exist"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.286294 4768 scope.go:117] "RemoveContainer" containerID="918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24"
Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.287843 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24\": container with ID starting with 918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24 not found: ID does not exist" containerID="918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.287885 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24"} err="failed to get container status \"918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24\": rpc error: code = NotFound desc = could not find container \"918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24\": container with ID starting with 918bc475d8a787433e6de941245cc606173f224b54595b0ac172966bee184b24 not found: ID does not exist"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.287911 4768 scope.go:117] "RemoveContainer" containerID="4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48"
Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.288754 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48\": container with ID starting with 4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48 not found: ID does not exist" containerID="4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.288795 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48"} err="failed to get container status \"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48\": rpc error: code = NotFound desc = could not find container \"4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48\": container with ID starting with 4cdba2c5b95dd325440338ac08faabdeba7a2f025aa387f6269dbcb55bc33a48 not found: ID does not exist"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.288821 4768 scope.go:117] "RemoveContainer" containerID="43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822"
Nov 24 18:09:15 crc kubenswrapper[4768]: E1124 18:09:15.289040 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822\": container with ID starting with 43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822 not found: ID does not exist" containerID="43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.289060 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822"} err="failed to get container status \"43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822\": rpc error: code = NotFound desc = could not find container \"43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822\": container with ID starting with 43803b65eb1299d0a3c8494d3be16f754fc9fbbc5a3ab40b9527fc1b7640b822 not found: ID does not exist"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.343496 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jmsvx"]
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.344718 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.347723 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.347948 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350499 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-run-httpd\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350561 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-config-data\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350642 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-log-httpd\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350700 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350739 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smfbb\" (UniqueName: \"kubernetes.io/projected/a1fee949-0151-40ec-9c6e-1554e2279306-kube-api-access-smfbb\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.350797 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-scripts\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.351047 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-run-httpd\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.352342 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-log-httpd\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.355366 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jmsvx"]
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.358850 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.363678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.366633 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.370870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-config-data\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.371140 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smfbb\" (UniqueName: \"kubernetes.io/projected/a1fee949-0151-40ec-9c6e-1554e2279306-kube-api-access-smfbb\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.371402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-scripts\") pod \"ceilometer-0\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.452309 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-config-data\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.452639 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.452726 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs5h9\" (UniqueName: \"kubernetes.io/projected/59de83f1-13f3-416c-a538-008cc9fb6d76-kube-api-access-gs5h9\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.452956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-scripts\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.543473 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.554552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-scripts\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.554781 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-config-data\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.555007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.555061 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs5h9\" (UniqueName: \"kubernetes.io/projected/59de83f1-13f3-416c-a538-008cc9fb6d76-kube-api-access-gs5h9\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.559281 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-config-data\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.559331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.561069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-scripts\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.577289 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs5h9\" (UniqueName: \"kubernetes.io/projected/59de83f1-13f3-416c-a538-008cc9fb6d76-kube-api-access-gs5h9\") pod \"nova-cell1-cell-mapping-jmsvx\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.799679 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jmsvx"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.913751 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8" path="/var/lib/kubelet/pods/19bcd3a9-f7ee-43ab-aa3d-c956b2b098a8/volumes"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.914418 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb08325b-f3c3-424f-a7c3-5796cbd7edab" path="/var/lib/kubelet/pods/eb08325b-f3c3-424f-a7c3-5796cbd7edab/volumes"
Nov 24 18:09:15 crc kubenswrapper[4768]: I1124 18:09:15.998989 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 18:09:16 crc kubenswrapper[4768]: I1124 18:09:16.132933 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerStarted","Data":"d04483dc4f8b6d8a72be605e0beeed60423e07595c01de20780a8a47448ac924"}
Nov 24 18:09:16 crc kubenswrapper[4768]: I1124 18:09:16.134935 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c0b2190-f952-46d1-ace1-077ae4b4d860","Type":"ContainerStarted","Data":"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f"}
Nov 24 18:09:16 crc kubenswrapper[4768]: I1124 18:09:16.134997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c0b2190-f952-46d1-ace1-077ae4b4d860","Type":"ContainerStarted","Data":"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6"}
Nov 24 18:09:16 crc kubenswrapper[4768]: I1124 18:09:16.168730 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.168696068 podStartE2EDuration="2.168696068s" podCreationTimestamp="2025-11-24 18:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:16.159337849 +0000 UTC m=+1195.019919646" watchObservedRunningTime="2025-11-24 18:09:16.168696068 +0000 UTC m=+1195.029277865"
Nov 24 18:09:16 crc kubenswrapper[4768]: I1124 18:09:16.238540 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jmsvx"]
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.146788 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jmsvx" event={"ID":"59de83f1-13f3-416c-a538-008cc9fb6d76","Type":"ContainerStarted","Data":"0099633bf76be402d5195442a63d0ef90ffd809bf304d2248759712ff4b9006c"}
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.147149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jmsvx" event={"ID":"59de83f1-13f3-416c-a538-008cc9fb6d76","Type":"ContainerStarted","Data":"e151b696d1b7111b0946d0c3f4e142cefc7356acdddd0afbebd5dcb1f63a47b6"}
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.149791 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerStarted","Data":"81d6f4cf9f89c103d1c185e18d10d08ef9ded5976903ffd1d906d8bfc349b5ef"}
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.169113 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jmsvx" podStartSLOduration=2.16909001 podStartE2EDuration="2.16909001s" podCreationTimestamp="2025-11-24 18:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:17.160526212 +0000 UTC m=+1196.021107989" watchObservedRunningTime="2025-11-24 18:09:17.16909001 +0000 UTC m=+1196.029671787"
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.539635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-522bl"
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.615305 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-fltgw"]
Nov 24 18:09:17 crc kubenswrapper[4768]: I1124 18:09:17.615803 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="dnsmasq-dns" containerID="cri-o://dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13" gracePeriod=10
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.075323 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-fltgw"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.163105 4768 generic.go:334] "Generic (PLEG): container finished" podID="46501d01-d421-402b-889b-6135c2c8ef8a" containerID="dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13" exitCode=0
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.163185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" event={"ID":"46501d01-d421-402b-889b-6135c2c8ef8a","Type":"ContainerDied","Data":"dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13"}
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.163211 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-fltgw"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.163237 4768 scope.go:117] "RemoveContainer" containerID="dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.163226 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" event={"ID":"46501d01-d421-402b-889b-6135c2c8ef8a","Type":"ContainerDied","Data":"78938fed3a0255362fa28a7e8a517374d96eb06ba4704dfeb016c99bf14fa9d8"}
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.167343 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerStarted","Data":"c53e0604152b2d13e447c6824d9047eff5af845863468352f4a02e9e69565251"}
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.186995 4768 scope.go:117] "RemoveContainer" containerID="aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.213086 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-dns-svc\") pod \"46501d01-d421-402b-889b-6135c2c8ef8a\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") "
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.213205 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-config\") pod \"46501d01-d421-402b-889b-6135c2c8ef8a\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") "
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.213235 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6r7l\" (UniqueName: \"kubernetes.io/projected/46501d01-d421-402b-889b-6135c2c8ef8a-kube-api-access-r6r7l\") pod \"46501d01-d421-402b-889b-6135c2c8ef8a\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") "
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.213284 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-sb\") pod \"46501d01-d421-402b-889b-6135c2c8ef8a\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") "
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.213308 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-nb\") pod \"46501d01-d421-402b-889b-6135c2c8ef8a\" (UID: \"46501d01-d421-402b-889b-6135c2c8ef8a\") "
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.217458 4768 scope.go:117] "RemoveContainer" containerID="dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13"
Nov 24 18:09:18 crc kubenswrapper[4768]: E1124 18:09:18.219223 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13\": container with ID starting with dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13 not found: ID does not exist" containerID="dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.220683 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13"} err="failed to get container status \"dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13\": rpc error: code = NotFound desc = could not find container \"dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13\": container with ID starting with dbaa3cab83c497c22319b76fd143876e6c5206ef1715da275bd399e4857aca13 not found: ID does not exist"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.220817 4768 scope.go:117] "RemoveContainer" containerID="aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987"
Nov 24 18:09:18 crc kubenswrapper[4768]: E1124 18:09:18.223964 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987\": container with ID starting with aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987 not found: ID does not exist" containerID="aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.224026 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987"} err="failed to get container status \"aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987\": rpc error: code = NotFound desc = could not find container \"aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987\": container with ID starting with aa1073748b679b6861dd3338fd83362634999f78e5a485fa8adb1f2df8407987 not found: ID does not exist"
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.232191 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46501d01-d421-402b-889b-6135c2c8ef8a-kube-api-access-r6r7l" (OuterVolumeSpecName: "kube-api-access-r6r7l") pod "46501d01-d421-402b-889b-6135c2c8ef8a" (UID: "46501d01-d421-402b-889b-6135c2c8ef8a"). InnerVolumeSpecName "kube-api-access-r6r7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.294964 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46501d01-d421-402b-889b-6135c2c8ef8a" (UID: "46501d01-d421-402b-889b-6135c2c8ef8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.296636 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46501d01-d421-402b-889b-6135c2c8ef8a" (UID: "46501d01-d421-402b-889b-6135c2c8ef8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.298352 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46501d01-d421-402b-889b-6135c2c8ef8a" (UID: "46501d01-d421-402b-889b-6135c2c8ef8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.301179 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-config" (OuterVolumeSpecName: "config") pod "46501d01-d421-402b-889b-6135c2c8ef8a" (UID: "46501d01-d421-402b-889b-6135c2c8ef8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.316028 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.316074 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-config\") on node \"crc\" DevicePath \"\""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.316088 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6r7l\" (UniqueName: \"kubernetes.io/projected/46501d01-d421-402b-889b-6135c2c8ef8a-kube-api-access-r6r7l\") on node \"crc\" DevicePath \"\""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.316101 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.316112 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46501d01-d421-402b-889b-6135c2c8ef8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.504536 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-fltgw"]
Nov 24 18:09:18 crc kubenswrapper[4768]: I1124 18:09:18.511998 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-fltgw"]
Nov 24 18:09:19 crc kubenswrapper[4768]: I1124 18:09:19.181521 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerStarted","Data":"e69bae1c93e3efacb4eb74e45dc2663d4eebf35b375821c2f9cd6d5f63a9854e"}
Nov 24 18:09:19 crc kubenswrapper[4768]: I1124 18:09:19.936284 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" path="/var/lib/kubelet/pods/46501d01-d421-402b-889b-6135c2c8ef8a/volumes"
Nov 24 18:09:20 crc kubenswrapper[4768]: I1124 18:09:20.197728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerStarted","Data":"6ce3d787f405cb55f2496ac50e073ef9076246a1732c5891e4805da40731dea6"}
Nov 24 18:09:20 crc kubenswrapper[4768]: I1124 18:09:20.198041 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 18:09:20 crc kubenswrapper[4768]: I1124 18:09:20.224142 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7648564420000001 podStartE2EDuration="5.224116671s" podCreationTimestamp="2025-11-24 18:09:15 +0000 UTC" firstStartedPulling="2025-11-24 18:09:16.002912117 +0000 UTC m=+1194.863493894" lastFinishedPulling="2025-11-24
18:09:19.462172346 +0000 UTC m=+1198.322754123" observedRunningTime="2025-11-24 18:09:20.219070631 +0000 UTC m=+1199.079652448" watchObservedRunningTime="2025-11-24 18:09:20.224116671 +0000 UTC m=+1199.084698488" Nov 24 18:09:22 crc kubenswrapper[4768]: I1124 18:09:22.224969 4768 generic.go:334] "Generic (PLEG): container finished" podID="59de83f1-13f3-416c-a538-008cc9fb6d76" containerID="0099633bf76be402d5195442a63d0ef90ffd809bf304d2248759712ff4b9006c" exitCode=0 Nov 24 18:09:22 crc kubenswrapper[4768]: I1124 18:09:22.225080 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jmsvx" event={"ID":"59de83f1-13f3-416c-a538-008cc9fb6d76","Type":"ContainerDied","Data":"0099633bf76be402d5195442a63d0ef90ffd809bf304d2248759712ff4b9006c"} Nov 24 18:09:22 crc kubenswrapper[4768]: I1124 18:09:22.988217 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-566b5b7845-fltgw" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.578512 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jmsvx" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.730259 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-scripts\") pod \"59de83f1-13f3-416c-a538-008cc9fb6d76\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.730349 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-config-data\") pod \"59de83f1-13f3-416c-a538-008cc9fb6d76\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.730541 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs5h9\" (UniqueName: \"kubernetes.io/projected/59de83f1-13f3-416c-a538-008cc9fb6d76-kube-api-access-gs5h9\") pod \"59de83f1-13f3-416c-a538-008cc9fb6d76\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.730742 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-combined-ca-bundle\") pod \"59de83f1-13f3-416c-a538-008cc9fb6d76\" (UID: \"59de83f1-13f3-416c-a538-008cc9fb6d76\") " Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.738139 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-scripts" (OuterVolumeSpecName: "scripts") pod "59de83f1-13f3-416c-a538-008cc9fb6d76" (UID: "59de83f1-13f3-416c-a538-008cc9fb6d76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.738199 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59de83f1-13f3-416c-a538-008cc9fb6d76-kube-api-access-gs5h9" (OuterVolumeSpecName: "kube-api-access-gs5h9") pod "59de83f1-13f3-416c-a538-008cc9fb6d76" (UID: "59de83f1-13f3-416c-a538-008cc9fb6d76"). InnerVolumeSpecName "kube-api-access-gs5h9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.760689 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-config-data" (OuterVolumeSpecName: "config-data") pod "59de83f1-13f3-416c-a538-008cc9fb6d76" (UID: "59de83f1-13f3-416c-a538-008cc9fb6d76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.788065 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59de83f1-13f3-416c-a538-008cc9fb6d76" (UID: "59de83f1-13f3-416c-a538-008cc9fb6d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.841702 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs5h9\" (UniqueName: \"kubernetes.io/projected/59de83f1-13f3-416c-a538-008cc9fb6d76-kube-api-access-gs5h9\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.842010 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.842022 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:23 crc kubenswrapper[4768]: I1124 18:09:23.842031 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de83f1-13f3-416c-a538-008cc9fb6d76-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.254913 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jmsvx" event={"ID":"59de83f1-13f3-416c-a538-008cc9fb6d76","Type":"ContainerDied","Data":"e151b696d1b7111b0946d0c3f4e142cefc7356acdddd0afbebd5dcb1f63a47b6"} Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.255192 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e151b696d1b7111b0946d0c3f4e142cefc7356acdddd0afbebd5dcb1f63a47b6" Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.254973 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jmsvx" Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.432079 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.432618 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-log" containerID="cri-o://2183e52e0e31f9affdb546caa2c49cd9253df65a3849760bf17010c659d6d6b3" gracePeriod=30 Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.432691 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-metadata" containerID="cri-o://40fd04b493c459aa04293c07a86b4daae3bd6802128b90cff665783fd72a3587" gracePeriod=30 Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.444051 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.444413 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-log" containerID="cri-o://abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6" gracePeriod=30 Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.444550 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-api" containerID="cri-o://ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f" gracePeriod=30 Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.456049 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.456342 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1eb38082-24b5-4378-8e39-c19b29273ab9" containerName="nova-scheduler-scheduler" containerID="cri-o://43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68" gracePeriod=30 Nov 24 18:09:24 crc kubenswrapper[4768]: I1124 18:09:24.974518 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.067797 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-combined-ca-bundle\") pod \"9c0b2190-f952-46d1-ace1-077ae4b4d860\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.067861 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0b2190-f952-46d1-ace1-077ae4b4d860-logs\") pod \"9c0b2190-f952-46d1-ace1-077ae4b4d860\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.067895 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf95h\" (UniqueName: \"kubernetes.io/projected/9c0b2190-f952-46d1-ace1-077ae4b4d860-kube-api-access-sf95h\") pod \"9c0b2190-f952-46d1-ace1-077ae4b4d860\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.067966 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-internal-tls-certs\") pod \"9c0b2190-f952-46d1-ace1-077ae4b4d860\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.068011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-public-tls-certs\") pod \"9c0b2190-f952-46d1-ace1-077ae4b4d860\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.068174 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-config-data\") pod \"9c0b2190-f952-46d1-ace1-077ae4b4d860\" (UID: \"9c0b2190-f952-46d1-ace1-077ae4b4d860\") " Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.069992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c0b2190-f952-46d1-ace1-077ae4b4d860-logs" (OuterVolumeSpecName: "logs") pod "9c0b2190-f952-46d1-ace1-077ae4b4d860" (UID: "9c0b2190-f952-46d1-ace1-077ae4b4d860"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.076024 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0b2190-f952-46d1-ace1-077ae4b4d860-kube-api-access-sf95h" (OuterVolumeSpecName: "kube-api-access-sf95h") pod "9c0b2190-f952-46d1-ace1-077ae4b4d860" (UID: "9c0b2190-f952-46d1-ace1-077ae4b4d860"). InnerVolumeSpecName "kube-api-access-sf95h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.095010 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-config-data" (OuterVolumeSpecName: "config-data") pod "9c0b2190-f952-46d1-ace1-077ae4b4d860" (UID: "9c0b2190-f952-46d1-ace1-077ae4b4d860"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.102087 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c0b2190-f952-46d1-ace1-077ae4b4d860" (UID: "9c0b2190-f952-46d1-ace1-077ae4b4d860"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.132653 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9c0b2190-f952-46d1-ace1-077ae4b4d860" (UID: "9c0b2190-f952-46d1-ace1-077ae4b4d860"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.142541 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9c0b2190-f952-46d1-ace1-077ae4b4d860" (UID: "9c0b2190-f952-46d1-ace1-077ae4b4d860"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.169646 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.169688 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.169700 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0b2190-f952-46d1-ace1-077ae4b4d860-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.169712 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf95h\" (UniqueName: \"kubernetes.io/projected/9c0b2190-f952-46d1-ace1-077ae4b4d860-kube-api-access-sf95h\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.169721 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.169733 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0b2190-f952-46d1-ace1-077ae4b4d860-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.266331 4768 generic.go:334] "Generic (PLEG): container finished" podID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerID="ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f" exitCode=0 Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.266365 4768 generic.go:334] "Generic (PLEG): container finished" podID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerID="abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6" exitCode=143 Nov 24 18:09:25 crc kubenswrapper[4768]: 
I1124 18:09:25.266395 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.266423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c0b2190-f952-46d1-ace1-077ae4b4d860","Type":"ContainerDied","Data":"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f"} Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.266454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c0b2190-f952-46d1-ace1-077ae4b4d860","Type":"ContainerDied","Data":"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6"} Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.266465 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c0b2190-f952-46d1-ace1-077ae4b4d860","Type":"ContainerDied","Data":"a4d843be3224bdc90e5c90cd87718b15c5e11d595218de217e950eae73af8d5f"} Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.266497 4768 scope.go:117] "RemoveContainer" containerID="ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.271890 4768 generic.go:334] "Generic (PLEG): container finished" podID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerID="2183e52e0e31f9affdb546caa2c49cd9253df65a3849760bf17010c659d6d6b3" exitCode=143 Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.271937 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceef9077-3c84-430c-97e3-965f6eb58b7c","Type":"ContainerDied","Data":"2183e52e0e31f9affdb546caa2c49cd9253df65a3849760bf17010c659d6d6b3"} Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.290969 4768 scope.go:117] "RemoveContainer" containerID="abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.298681 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.308710 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.312623 4768 scope.go:117] "RemoveContainer" containerID="ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f" Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.313051 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f\": container with ID starting with ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f not found: ID does not exist" containerID="ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.313099 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f"} err="failed to get container status \"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f\": rpc error: code = NotFound desc = could not find container \"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f\": container with ID starting with ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f not found: ID does not exist" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.313125 4768 scope.go:117] 
"RemoveContainer" containerID="abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6" Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.313636 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6\": container with ID starting with abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6 not found: ID does not exist" containerID="abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.313675 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6"} err="failed to get container status \"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6\": rpc error: code = NotFound desc = could not find container \"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6\": container with ID starting with abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6 not found: ID does not exist" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.313701 4768 scope.go:117] "RemoveContainer" containerID="ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.314003 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f"} err="failed to get container status \"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f\": rpc error: code = NotFound desc = could not find container \"ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f\": container with ID starting with ad057d6ab489f774cc725b7331fa60e0bf12bf248031628404b59ad819a8687f not found: ID does not exist" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.314064 4768 scope.go:117] "RemoveContainer" containerID="abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.314549 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6"} err="failed to get container status \"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6\": rpc error: code = NotFound desc = could not find container \"abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6\": container with ID starting with abd0adea2783c2388feaece5bbba54623b2cd8da86d7a346359d262caf5433e6 not found: ID does not exist" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.335619 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.336199 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de83f1-13f3-416c-a538-008cc9fb6d76" containerName="nova-manage" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.336329 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de83f1-13f3-416c-a538-008cc9fb6d76" containerName="nova-manage" Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.336394 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="dnsmasq-dns" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.336444 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="dnsmasq-dns" Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.336525 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="init" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.336577 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="init" Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.336637 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-api" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.336687 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-api" Nov 24 18:09:25 crc kubenswrapper[4768]: E1124 18:09:25.336746 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-log" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.336794 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-log" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.336999 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="59de83f1-13f3-416c-a538-008cc9fb6d76" containerName="nova-manage" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.337079 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="46501d01-d421-402b-889b-6135c2c8ef8a" containerName="dnsmasq-dns" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.337136 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-log" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.337199 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" containerName="nova-api-api" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.338167 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.340915 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.340948 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.340999 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.352154 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.476774 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.476865 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.476955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt5l\" (UniqueName: \"kubernetes.io/projected/09017e2b-873f-446e-9d2c-8dcdddb26732-kube-api-access-cbt5l\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.477086 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09017e2b-873f-446e-9d2c-8dcdddb26732-logs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.477240 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-config-data\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.477308 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-public-tls-certs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.579592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09017e2b-873f-446e-9d2c-8dcdddb26732-logs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.579703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-config-data\") pod \"nova-api-0\" (UID: 
\"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.579756 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-public-tls-certs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.579812 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.579845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.579874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbt5l\" (UniqueName: \"kubernetes.io/projected/09017e2b-873f-446e-9d2c-8dcdddb26732-kube-api-access-cbt5l\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.580718 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09017e2b-873f-446e-9d2c-8dcdddb26732-logs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.585256 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.585310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.585327 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-config-data\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.588015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09017e2b-873f-446e-9d2c-8dcdddb26732-public-tls-certs\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.604035 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbt5l\" (UniqueName: \"kubernetes.io/projected/09017e2b-873f-446e-9d2c-8dcdddb26732-kube-api-access-cbt5l\") pod \"nova-api-0\" (UID: \"09017e2b-873f-446e-9d2c-8dcdddb26732\") " 
pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.670004 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 18:09:25 crc kubenswrapper[4768]: I1124 18:09:25.933769 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0b2190-f952-46d1-ace1-077ae4b4d860" path="/var/lib/kubelet/pods/9c0b2190-f952-46d1-ace1-077ae4b4d860/volumes" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.078995 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.109812 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.198359 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/1eb38082-24b5-4378-8e39-c19b29273ab9-kube-api-access-67kbp\") pod \"1eb38082-24b5-4378-8e39-c19b29273ab9\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.198473 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-combined-ca-bundle\") pod \"1eb38082-24b5-4378-8e39-c19b29273ab9\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.198569 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-config-data\") pod \"1eb38082-24b5-4378-8e39-c19b29273ab9\" (UID: \"1eb38082-24b5-4378-8e39-c19b29273ab9\") " Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.207718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb38082-24b5-4378-8e39-c19b29273ab9-kube-api-access-67kbp" (OuterVolumeSpecName: "kube-api-access-67kbp") pod "1eb38082-24b5-4378-8e39-c19b29273ab9" (UID: "1eb38082-24b5-4378-8e39-c19b29273ab9"). InnerVolumeSpecName "kube-api-access-67kbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.236196 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eb38082-24b5-4378-8e39-c19b29273ab9" (UID: "1eb38082-24b5-4378-8e39-c19b29273ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.237519 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-config-data" (OuterVolumeSpecName: "config-data") pod "1eb38082-24b5-4378-8e39-c19b29273ab9" (UID: "1eb38082-24b5-4378-8e39-c19b29273ab9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.286234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09017e2b-873f-446e-9d2c-8dcdddb26732","Type":"ContainerStarted","Data":"90270fdf7c8577c3a3025fd33d5036f4e1bc737a1ec1e258f9b8d62c3f2dc787"} Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.286275 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09017e2b-873f-446e-9d2c-8dcdddb26732","Type":"ContainerStarted","Data":"4e619d70852c4a82774982b39a73a4b5aace21146b92607d59922f19bb4208d2"} Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.287645 4768 generic.go:334] "Generic (PLEG): container finished" podID="1eb38082-24b5-4378-8e39-c19b29273ab9" containerID="43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68" exitCode=0 Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.287691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1eb38082-24b5-4378-8e39-c19b29273ab9","Type":"ContainerDied","Data":"43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68"} Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.287728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1eb38082-24b5-4378-8e39-c19b29273ab9","Type":"ContainerDied","Data":"b315f2c001cf1461ac208f73c293f5ed6c192423e242e6e2d8784e58b7632f85"} Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.287747 4768 scope.go:117] "RemoveContainer" containerID="43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.287881 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.301306 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.301344 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb38082-24b5-4378-8e39-c19b29273ab9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.301357 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67kbp\" (UniqueName: \"kubernetes.io/projected/1eb38082-24b5-4378-8e39-c19b29273ab9-kube-api-access-67kbp\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.305523 4768 scope.go:117] "RemoveContainer" containerID="43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68" Nov 24 18:09:26 crc kubenswrapper[4768]: E1124 18:09:26.306400 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68\": container with ID starting with 43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68 not found: ID does not exist" containerID="43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.306433 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68"} err="failed to get container status \"43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68\": rpc error: code = NotFound desc = could not find container \"43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68\": container with ID starting with 43ec9096513b2ab2d149c88ccc48758c86dd3d313afad7d693b39424893d3b68 not found: ID does not exist" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.326782 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.335784 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.357683 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:09:26 crc kubenswrapper[4768]: E1124 18:09:26.358366 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb38082-24b5-4378-8e39-c19b29273ab9" containerName="nova-scheduler-scheduler" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.358442 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb38082-24b5-4378-8e39-c19b29273ab9" containerName="nova-scheduler-scheduler" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.358708 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb38082-24b5-4378-8e39-c19b29273ab9" containerName="nova-scheduler-scheduler" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.359455 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.362269 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.382955 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.504129 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.504552 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb7z\" (UniqueName: \"kubernetes.io/projected/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-kube-api-access-crb7z\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.504581 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-config-data\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.607003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb7z\" (UniqueName: \"kubernetes.io/projected/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-kube-api-access-crb7z\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.607069 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-config-data\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.607164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.611689 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-config-data\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.613150 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.634559 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb7z\" (UniqueName: 
\"kubernetes.io/projected/ba0653c2-07ff-4e12-a6ab-d1f1f81a5344-kube-api-access-crb7z\") pod \"nova-scheduler-0\" (UID: \"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344\") " pod="openstack/nova-scheduler-0" Nov 24 18:09:26 crc kubenswrapper[4768]: I1124 18:09:26.696045 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.103347 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 18:09:27 crc kubenswrapper[4768]: W1124 18:09:27.107801 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba0653c2_07ff_4e12_a6ab_d1f1f81a5344.slice/crio-c342a2aa9d5afc60aa5a2927ae15d1b97ec714e31f516dc181df17d3658594ae WatchSource:0}: Error finding container c342a2aa9d5afc60aa5a2927ae15d1b97ec714e31f516dc181df17d3658594ae: Status 404 returned error can't find the container with id c342a2aa9d5afc60aa5a2927ae15d1b97ec714e31f516dc181df17d3658594ae Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.296929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09017e2b-873f-446e-9d2c-8dcdddb26732","Type":"ContainerStarted","Data":"6c1f01b2c8ff70fa81627f474ba726061a4ee9d902d0fb86473271884d0d93ec"} Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.299610 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344","Type":"ContainerStarted","Data":"8d1fbfae69a645e25e683d06f595a74a037927625d96f2ed6e90edb35d0066bc"} Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.299683 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba0653c2-07ff-4e12-a6ab-d1f1f81a5344","Type":"ContainerStarted","Data":"c342a2aa9d5afc60aa5a2927ae15d1b97ec714e31f516dc181df17d3658594ae"} Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.329118 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.329103812 podStartE2EDuration="2.329103812s" podCreationTimestamp="2025-11-24 18:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:27.322432927 +0000 UTC m=+1206.183014704" watchObservedRunningTime="2025-11-24 18:09:27.329103812 +0000 UTC m=+1206.189685589" Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.342166 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.342142674 podStartE2EDuration="1.342142674s" podCreationTimestamp="2025-11-24 18:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:27.340141678 +0000 UTC m=+1206.200723475" watchObservedRunningTime="2025-11-24 18:09:27.342142674 +0000 UTC m=+1206.202724451" Nov 24 18:09:27 crc kubenswrapper[4768]: I1124 18:09:27.910434 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb38082-24b5-4378-8e39-c19b29273ab9" path="/var/lib/kubelet/pods/1eb38082-24b5-4378-8e39-c19b29273ab9/volumes" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.094092 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": dial tcp 10.217.0.180:8775: connect: connection refused" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.094215 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": dial tcp 10.217.0.180:8775: connect: connection refused" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.317531 4768 generic.go:334] "Generic (PLEG): container finished" podID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerID="40fd04b493c459aa04293c07a86b4daae3bd6802128b90cff665783fd72a3587" exitCode=0 Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.319649 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceef9077-3c84-430c-97e3-965f6eb58b7c","Type":"ContainerDied","Data":"40fd04b493c459aa04293c07a86b4daae3bd6802128b90cff665783fd72a3587"} Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.724509 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.850861 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-combined-ca-bundle\") pod \"ceef9077-3c84-430c-97e3-965f6eb58b7c\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.851059 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef9077-3c84-430c-97e3-965f6eb58b7c-logs\") pod \"ceef9077-3c84-430c-97e3-965f6eb58b7c\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.851168 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-nova-metadata-tls-certs\") pod \"ceef9077-3c84-430c-97e3-965f6eb58b7c\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.851216 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92jv\" (UniqueName: \"kubernetes.io/projected/ceef9077-3c84-430c-97e3-965f6eb58b7c-kube-api-access-v92jv\") pod \"ceef9077-3c84-430c-97e3-965f6eb58b7c\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.851317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-config-data\") pod \"ceef9077-3c84-430c-97e3-965f6eb58b7c\" (UID: \"ceef9077-3c84-430c-97e3-965f6eb58b7c\") " Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.851929 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceef9077-3c84-430c-97e3-965f6eb58b7c-logs" (OuterVolumeSpecName: "logs") pod "ceef9077-3c84-430c-97e3-965f6eb58b7c" (UID: "ceef9077-3c84-430c-97e3-965f6eb58b7c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.857331 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceef9077-3c84-430c-97e3-965f6eb58b7c-kube-api-access-v92jv" (OuterVolumeSpecName: "kube-api-access-v92jv") pod "ceef9077-3c84-430c-97e3-965f6eb58b7c" (UID: "ceef9077-3c84-430c-97e3-965f6eb58b7c"). InnerVolumeSpecName "kube-api-access-v92jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.884994 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-config-data" (OuterVolumeSpecName: "config-data") pod "ceef9077-3c84-430c-97e3-965f6eb58b7c" (UID: "ceef9077-3c84-430c-97e3-965f6eb58b7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.886695 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceef9077-3c84-430c-97e3-965f6eb58b7c" (UID: "ceef9077-3c84-430c-97e3-965f6eb58b7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.905751 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ceef9077-3c84-430c-97e3-965f6eb58b7c" (UID: "ceef9077-3c84-430c-97e3-965f6eb58b7c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.953511 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.953558 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.953572 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceef9077-3c84-430c-97e3-965f6eb58b7c-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.953583 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceef9077-3c84-430c-97e3-965f6eb58b7c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:28 crc kubenswrapper[4768]: I1124 18:09:28.953593 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v92jv\" (UniqueName: \"kubernetes.io/projected/ceef9077-3c84-430c-97e3-965f6eb58b7c-kube-api-access-v92jv\") on node \"crc\" DevicePath \"\"" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.328416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceef9077-3c84-430c-97e3-965f6eb58b7c","Type":"ContainerDied","Data":"ea1935e2513e6c07f65ab87c57602e0fabe086c5669349033563355b51292f4f"} Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.328459 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.328468 4768 scope.go:117] "RemoveContainer" containerID="40fd04b493c459aa04293c07a86b4daae3bd6802128b90cff665783fd72a3587" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.381180 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.381356 4768 scope.go:117] "RemoveContainer" containerID="2183e52e0e31f9affdb546caa2c49cd9253df65a3849760bf17010c659d6d6b3" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.416923 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.427225 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:09:29 crc kubenswrapper[4768]: E1124 18:09:29.427684 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-metadata" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.427703 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-metadata" Nov 24 18:09:29 crc kubenswrapper[4768]: E1124 18:09:29.427723 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-log" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.427729 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-log" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.427887 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-metadata" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.427908 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" containerName="nova-metadata-log" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.428973 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.431778 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.431876 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.436990 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.575441 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26f8aa-16b5-445c-9568-4e56b3665004-logs\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.575588 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.575802 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrnb\" (UniqueName: \"kubernetes.io/projected/9e26f8aa-16b5-445c-9568-4e56b3665004-kube-api-access-9jrnb\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.575872 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.576308 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-config-data\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.679386 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.679551 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-config-data\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.679626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26f8aa-16b5-445c-9568-4e56b3665004-logs\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 
18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.679659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.679713 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrnb\" (UniqueName: \"kubernetes.io/projected/9e26f8aa-16b5-445c-9568-4e56b3665004-kube-api-access-9jrnb\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.681773 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26f8aa-16b5-445c-9568-4e56b3665004-logs\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.686205 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.688300 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.689691 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e26f8aa-16b5-445c-9568-4e56b3665004-config-data\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.704069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrnb\" (UniqueName: \"kubernetes.io/projected/9e26f8aa-16b5-445c-9568-4e56b3665004-kube-api-access-9jrnb\") pod \"nova-metadata-0\" (UID: \"9e26f8aa-16b5-445c-9568-4e56b3665004\") " pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.752063 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 18:09:29 crc kubenswrapper[4768]: I1124 18:09:29.910435 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceef9077-3c84-430c-97e3-965f6eb58b7c" path="/var/lib/kubelet/pods/ceef9077-3c84-430c-97e3-965f6eb58b7c/volumes" Nov 24 18:09:30 crc kubenswrapper[4768]: W1124 18:09:30.267253 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e26f8aa_16b5_445c_9568_4e56b3665004.slice/crio-d922290ae3c8d69a5ab11518b27a1f304984bffaf5ea312c6c6fc46c27fbe881 WatchSource:0}: Error finding container d922290ae3c8d69a5ab11518b27a1f304984bffaf5ea312c6c6fc46c27fbe881: Status 404 returned error can't find the container with id d922290ae3c8d69a5ab11518b27a1f304984bffaf5ea312c6c6fc46c27fbe881 Nov 24 18:09:30 crc kubenswrapper[4768]: I1124 18:09:30.267949 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 18:09:30 crc kubenswrapper[4768]: I1124 18:09:30.343233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e26f8aa-16b5-445c-9568-4e56b3665004","Type":"ContainerStarted","Data":"d922290ae3c8d69a5ab11518b27a1f304984bffaf5ea312c6c6fc46c27fbe881"} Nov 24 18:09:31 crc kubenswrapper[4768]: I1124 18:09:31.357719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e26f8aa-16b5-445c-9568-4e56b3665004","Type":"ContainerStarted","Data":"222038331ab18603fe509fdece9bd9b99722fcffe9db254962bbd41eae8ce0d3"} Nov 24 18:09:31 crc kubenswrapper[4768]: I1124 18:09:31.358045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e26f8aa-16b5-445c-9568-4e56b3665004","Type":"ContainerStarted","Data":"075d409f83a537a9d1e4eee458d0e360e966761236f8a101d491623577f6b555"} Nov 24 18:09:31 crc kubenswrapper[4768]: I1124 18:09:31.391353 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.391332693 podStartE2EDuration="2.391332693s" podCreationTimestamp="2025-11-24 18:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:09:31.385113171 +0000 UTC m=+1210.245694948" watchObservedRunningTime="2025-11-24 18:09:31.391332693 +0000 UTC m=+1210.251914470" Nov 24 18:09:31 crc kubenswrapper[4768]: I1124 18:09:31.696930 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 18:09:34 crc kubenswrapper[4768]: I1124 18:09:34.752127 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 18:09:34 crc kubenswrapper[4768]: I1124 18:09:34.752449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 18:09:35 crc kubenswrapper[4768]: I1124 18:09:35.670714 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 18:09:35 crc kubenswrapper[4768]: I1124 18:09:35.671147 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 18:09:36 crc kubenswrapper[4768]: I1124 18:09:36.690763 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09017e2b-873f-446e-9d2c-8dcdddb26732" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 18:09:36 crc kubenswrapper[4768]: I1124 18:09:36.690825 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09017e2b-873f-446e-9d2c-8dcdddb26732" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 18:09:36 crc kubenswrapper[4768]: I1124 18:09:36.696931 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 18:09:36 crc kubenswrapper[4768]: I1124 18:09:36.721032 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 18:09:37 crc kubenswrapper[4768]: I1124 18:09:37.468154 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 18:09:39 crc kubenswrapper[4768]: I1124 18:09:39.752960 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 18:09:39 crc kubenswrapper[4768]: I1124 18:09:39.753008 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 18:09:40 crc kubenswrapper[4768]: I1124 18:09:40.767758 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9e26f8aa-16b5-445c-9568-4e56b3665004" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 18:09:40 crc kubenswrapper[4768]: I1124 18:09:40.767766 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9e26f8aa-16b5-445c-9568-4e56b3665004" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 18:09:45 crc kubenswrapper[4768]: I1124 18:09:45.553889 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 18:09:45 crc kubenswrapper[4768]: I1124 18:09:45.677874 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 18:09:45 crc kubenswrapper[4768]: I1124 18:09:45.678535 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 18:09:45 crc kubenswrapper[4768]: I1124 18:09:45.678704 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 18:09:45 crc kubenswrapper[4768]: I1124 18:09:45.685544 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 18:09:46 crc kubenswrapper[4768]: I1124 18:09:46.509082 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 18:09:46 crc kubenswrapper[4768]: I1124 18:09:46.516633 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 18:09:49 crc kubenswrapper[4768]: I1124 18:09:49.757387 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 18:09:49 crc kubenswrapper[4768]: I1124 18:09:49.758068 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" 
Nov 24 18:09:49 crc kubenswrapper[4768]: I1124 18:09:49.766150 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 18:09:49 crc kubenswrapper[4768]: I1124 18:09:49.766686 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 18:09:57 crc kubenswrapper[4768]: I1124 18:09:57.756590 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:09:58 crc kubenswrapper[4768]: I1124 18:09:58.628364 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:10:01 crc kubenswrapper[4768]: I1124 18:10:01.755697 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="rabbitmq" containerID="cri-o://4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22" gracePeriod=604797 Nov 24 18:10:02 crc kubenswrapper[4768]: I1124 18:10:02.529515 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="rabbitmq" containerID="cri-o://fb25e3f1702fc6111b530154f86d335b831c203bc5818f6f7da298bb2061ef6b" gracePeriod=604797 Nov 24 18:10:03 crc kubenswrapper[4768]: I1124 18:10:03.862687 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 24 18:10:04 crc kubenswrapper[4768]: I1124 18:10:04.140841 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.301142 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.435981 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-server-conf\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436018 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436045 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-confd\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436107 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-config-data\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436134 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-plugins\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436155 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f67f41ac-4a1d-45c4-baaf-500062871fcb-erlang-cookie-secret\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f67f41ac-4a1d-45c4-baaf-500062871fcb-pod-info\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436197 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrrl\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-kube-api-access-mcrrl\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436263 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-plugins-conf\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-tls\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: 
\"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.436378 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-erlang-cookie\") pod \"f67f41ac-4a1d-45c4-baaf-500062871fcb\" (UID: \"f67f41ac-4a1d-45c4-baaf-500062871fcb\") " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.437364 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.438345 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.442615 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67f41ac-4a1d-45c4-baaf-500062871fcb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.443011 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-kube-api-access-mcrrl" (OuterVolumeSpecName: "kube-api-access-mcrrl") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "kube-api-access-mcrrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.443046 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.443385 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.446940 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.449031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f67f41ac-4a1d-45c4-baaf-500062871fcb-pod-info" (OuterVolumeSpecName: "pod-info") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.470546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-config-data" (OuterVolumeSpecName: "config-data") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.508787 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-server-conf" (OuterVolumeSpecName: "server-conf") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538316 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538356 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538371 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538401 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538413 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538424 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f67f41ac-4a1d-45c4-baaf-500062871fcb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538436 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538446 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f67f41ac-4a1d-45c4-baaf-500062871fcb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 
18:10:08.538456 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f67f41ac-4a1d-45c4-baaf-500062871fcb-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.538466 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrrl\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-kube-api-access-mcrrl\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.548574 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f67f41ac-4a1d-45c4-baaf-500062871fcb" (UID: "f67f41ac-4a1d-45c4-baaf-500062871fcb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.560246 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.641698 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.641740 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f67f41ac-4a1d-45c4-baaf-500062871fcb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.749292 4768 generic.go:334] "Generic (PLEG): container finished" podID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerID="4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22" exitCode=0 Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.749332 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f67f41ac-4a1d-45c4-baaf-500062871fcb","Type":"ContainerDied","Data":"4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22"} Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.749378 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f67f41ac-4a1d-45c4-baaf-500062871fcb","Type":"ContainerDied","Data":"f95e1bbdaeb935ca0649e2d67369388443e77579b144542c9afa98a356d06b35"} Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.749370 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.749393 4768 scope.go:117] "RemoveContainer" containerID="4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.787095 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.797977 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.817269 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:10:08 crc kubenswrapper[4768]: E1124 18:10:08.817662 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="rabbitmq" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.817679 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="rabbitmq" Nov 24 18:10:08 crc kubenswrapper[4768]: E1124 18:10:08.817710 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="setup-container" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.817717 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="setup-container" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.817898 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" containerName="rabbitmq" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.820642 4768 scope.go:117] "RemoveContainer" containerID="c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.820906 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.823847 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.825264 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.825381 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.825503 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.825631 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.825825 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.825980 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mn6tk" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.831168 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.854322 4768 scope.go:117] "RemoveContainer" containerID="4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22" Nov 24 18:10:08 crc kubenswrapper[4768]: E1124 18:10:08.860085 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22\": container with ID starting with 4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22 not found: ID does not exist" containerID="4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.860136 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22"} err="failed to get container status \"4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22\": rpc error: code = NotFound desc = could not find container \"4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22\": container with ID starting with 4eae463f0f253b08c3368a4ad5b02c2d20046b97f34347119e8889a33b533e22 not found: ID does not exist" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.860163 4768 scope.go:117] "RemoveContainer" containerID="c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b" Nov 24 18:10:08 crc kubenswrapper[4768]: E1124 18:10:08.860655 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b\": container with ID starting with c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b not found: ID does not exist" containerID="c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.860702 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b"} err="failed to get container status 
\"c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b\": rpc error: code = NotFound desc = could not find container \"c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b\": container with ID starting with c79650b18ecd2360097a631b234f9877cee7111a7b8d25423597bf6bc329515b not found: ID does not exist" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9f4m\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-kube-api-access-w9f4m\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956253 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956420 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956448 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d3ded99-92ff-43cc-83de-6042d6c83acf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956470 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d3ded99-92ff-43cc-83de-6042d6c83acf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956553 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956582 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956627 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-config-data\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:08 crc kubenswrapper[4768]: I1124 18:10:08.956647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058229 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-config-data\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058600 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9f4m\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-kube-api-access-w9f4m\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058666 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d3ded99-92ff-43cc-83de-6042d6c83acf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " 
pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d3ded99-92ff-43cc-83de-6042d6c83acf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058836 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058865 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.058880 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.059966 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.060585 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-config-data\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.060908 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.062322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.062520 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.063448 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2d3ded99-92ff-43cc-83de-6042d6c83acf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.066259 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.066369 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d3ded99-92ff-43cc-83de-6042d6c83acf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.067188 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.073739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d3ded99-92ff-43cc-83de-6042d6c83acf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.089613 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9f4m\" (UniqueName: \"kubernetes.io/projected/2d3ded99-92ff-43cc-83de-6042d6c83acf-kube-api-access-w9f4m\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.115169 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2d3ded99-92ff-43cc-83de-6042d6c83acf\") " pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.147398 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.581466 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.763986 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d3ded99-92ff-43cc-83de-6042d6c83acf","Type":"ContainerStarted","Data":"6bb29918b7d0396663d8f33563963978cf415ead1df4032944de038479c54220"} Nov 24 18:10:09 crc kubenswrapper[4768]: I1124 18:10:09.909295 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67f41ac-4a1d-45c4-baaf-500062871fcb" path="/var/lib/kubelet/pods/f67f41ac-4a1d-45c4-baaf-500062871fcb/volumes" Nov 24 18:10:14 crc kubenswrapper[4768]: I1124 18:10:14.141136 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Nov 24 18:10:16 crc kubenswrapper[4768]: I1124 18:10:16.551635 4768 generic.go:334] "Generic (PLEG): container finished" podID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerID="fb25e3f1702fc6111b530154f86d335b831c203bc5818f6f7da298bb2061ef6b" exitCode=-1 Nov 24 18:10:16 crc kubenswrapper[4768]: I1124 18:10:16.551853 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96e8147b-fab1-4601-b8c7-00764af14ba7","Type":"ContainerDied","Data":"fb25e3f1702fc6111b530154f86d335b831c203bc5818f6f7da298bb2061ef6b"} Nov 24 18:10:16 crc kubenswrapper[4768]: I1124 18:10:16.951556 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.019789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-config-data\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.019851 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-erlang-cookie\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.019918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96e8147b-fab1-4601-b8c7-00764af14ba7-pod-info\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.019940 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-plugins-conf\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.019967 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-server-conf\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: 
\"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.020046 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96e8147b-fab1-4601-b8c7-00764af14ba7-erlang-cookie-secret\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.020087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-plugins\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.020169 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-tls\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.020217 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwvnw\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-kube-api-access-jwvnw\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.020243 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-confd\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.020294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"96e8147b-fab1-4601-b8c7-00764af14ba7\" (UID: \"96e8147b-fab1-4601-b8c7-00764af14ba7\") " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.021043 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.021860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.026431 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.038535 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e8147b-fab1-4601-b8c7-00764af14ba7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.038608 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.038635 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/96e8147b-fab1-4601-b8c7-00764af14ba7-pod-info" (OuterVolumeSpecName: "pod-info") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.038710 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-kube-api-access-jwvnw" (OuterVolumeSpecName: "kube-api-access-jwvnw") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "kube-api-access-jwvnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.052589 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.086637 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-config-data" (OuterVolumeSpecName: "config-data") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.112010 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-server-conf" (OuterVolumeSpecName: "server-conf") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122395 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122433 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122444 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96e8147b-fab1-4601-b8c7-00764af14ba7-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122452 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122460 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96e8147b-fab1-4601-b8c7-00764af14ba7-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122467 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96e8147b-fab1-4601-b8c7-00764af14ba7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122475 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122487 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122513 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwvnw\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-kube-api-access-jwvnw\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.122538 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.146214 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.208610 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "96e8147b-fab1-4601-b8c7-00764af14ba7" (UID: "96e8147b-fab1-4601-b8c7-00764af14ba7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.223873 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96e8147b-fab1-4601-b8c7-00764af14ba7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.223908 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.568173 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d3ded99-92ff-43cc-83de-6042d6c83acf","Type":"ContainerStarted","Data":"3fc089219835810795365b2350eea34d1e58e22ad1e46ea293bfd53bdff6cebc"} Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.570738 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96e8147b-fab1-4601-b8c7-00764af14ba7","Type":"ContainerDied","Data":"c65e5810d33f33b3c2f8a887ae2bf700b4b2eb2b9361687ff6bef594fa6f2a93"} Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.570786 4768 scope.go:117] "RemoveContainer" containerID="fb25e3f1702fc6111b530154f86d335b831c203bc5818f6f7da298bb2061ef6b" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.570801 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.593173 4768 scope.go:117] "RemoveContainer" containerID="a4aa0bb200172f83176cd90f33b02eadaee041ecd11044f7965416b7cf3adf3d" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.624596 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.634302 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.656432 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:10:17 crc kubenswrapper[4768]: E1124 18:10:17.657180 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="rabbitmq" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.657271 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="rabbitmq" Nov 24 18:10:17 crc kubenswrapper[4768]: E1124 18:10:17.657360 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="setup-container" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.657430 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="setup-container" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.657728 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" containerName="rabbitmq" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.658921 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.663890 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.664214 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.664624 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.664763 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.666479 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.666651 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.666825 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l62mf" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.690218 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731147 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731226 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731660 4768 reconciler_common.go:245] 
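[annotation] The recreation is also visible as a UID change: 96e8147b-... for the dead pod, f61bf1e8-... for its replacement. Several record types above carry a pod="ns/name" podUID="..." field pair, which is enough to list every UID a stable pod name has worn. A sketch (same input assumption as above):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the pod="ns/name" podUID="uuid" pair used by prober, pod_workers
    // and kuberuntime_container records in this journal.
    var podUID = regexp.MustCompile(`pod="([^"]+)" podUID="([^"]+)"`)

    func main() {
        uids := map[string]map[string]bool{}
        sc := bufio.NewScanner(os.Stdin) // one journal record per line
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            if m := podUID.FindStringSubmatch(sc.Text()); m != nil {
                if uids[m[1]] == nil {
                    uids[m[1]] = map[string]bool{}
                }
                uids[m[1]][m[2]] = true
            }
        }
        for pod, set := range uids {
            if len(set) > 1 {
                // More than one UID behind a stable name = pod was recreated.
                fmt.Printf("%s carried %d UIDs\n", pod, len(set))
            }
        }
    }
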
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwbnj\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-kube-api-access-jwbnj\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.731915 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.732000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.732027 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.732109 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.833944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.833988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834017 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834044 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834063 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834109 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834255 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwbnj\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-kube-api-access-jwbnj\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834280 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834529 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.834749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.835210 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.835242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.835277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.836178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.840981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.841760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.841768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.842010 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.854629 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwbnj\" (UniqueName: \"kubernetes.io/projected/f61bf1e8-52b3-4777-ad9b-52c8a1cad06c-kube-api-access-jwbnj\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.866006 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.925057 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e8147b-fab1-4601-b8c7-00764af14ba7" path="/var/lib/kubelet/pods/96e8147b-fab1-4601-b8c7-00764af14ba7/volumes" Nov 24 18:10:17 crc kubenswrapper[4768]: I1124 18:10:17.985265 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 18:10:18 crc kubenswrapper[4768]: I1124 18:10:18.442872 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 18:10:18 crc kubenswrapper[4768]: I1124 18:10:18.583349 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c","Type":"ContainerStarted","Data":"bb32c1ed464d6b43221efe1e3ba5352c22a5bd340097aba9939dd864295e27bd"} Nov 24 18:10:20 crc kubenswrapper[4768]: I1124 18:10:20.609966 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c","Type":"ContainerStarted","Data":"ee56884daa16b1af854cfa3d2734cd92eef280084c49930342b37f35a3cab567"} Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.126742 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8x7k5"] Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.128436 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.131754 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.152032 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8x7k5"] Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.200711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.200750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.200788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-config\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.201079 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8z79\" (UniqueName: 
\"kubernetes.io/projected/827d75ee-d24d-4bef-b476-ca47dff711b6-kube-api-access-f8z79\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.201243 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.201350 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.208989 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8x7k5"] Nov 24 18:10:21 crc kubenswrapper[4768]: E1124 18:10:21.209646 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-f8z79 openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" podUID="827d75ee-d24d-4bef-b476-ca47dff711b6" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.236182 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kg8vc"] Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.238036 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.248323 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kg8vc"] Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303486 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7z26\" (UniqueName: \"kubernetes.io/projected/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-kube-api-access-f7z26\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303671 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8z79\" (UniqueName: \"kubernetes.io/projected/827d75ee-d24d-4bef-b476-ca47dff711b6-kube-api-access-f8z79\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303712 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303745 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303806 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303849 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303872 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-config\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303940 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-config\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.303972 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.304732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.304753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.304989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.305208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.305347 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-config\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.324847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8z79\" (UniqueName: 
\"kubernetes.io/projected/827d75ee-d24d-4bef-b476-ca47dff711b6-kube-api-access-f8z79\") pod \"dnsmasq-dns-6447ccbd8f-8x7k5\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.405149 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.405209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.405308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.405358 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-config\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.405400 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.405455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7z26\" (UniqueName: \"kubernetes.io/projected/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-kube-api-access-f7z26\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.406191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.406253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.406550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.407724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-config\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.407863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.423971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7z26\" (UniqueName: \"kubernetes.io/projected/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-kube-api-access-f7z26\") pod \"dnsmasq-dns-864d5fc68c-kg8vc\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") " pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.555253 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.626100 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.659838 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.709344 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-openstack-edpm-ipam\") pod \"827d75ee-d24d-4bef-b476-ca47dff711b6\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.709437 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-dns-svc\") pod \"827d75ee-d24d-4bef-b476-ca47dff711b6\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.709473 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-config\") pod \"827d75ee-d24d-4bef-b476-ca47dff711b6\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.709643 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-nb\") pod \"827d75ee-d24d-4bef-b476-ca47dff711b6\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.709756 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-sb\") pod \"827d75ee-d24d-4bef-b476-ca47dff711b6\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.709894 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8z79\" (UniqueName: \"kubernetes.io/projected/827d75ee-d24d-4bef-b476-ca47dff711b6-kube-api-access-f8z79\") pod \"827d75ee-d24d-4bef-b476-ca47dff711b6\" (UID: \"827d75ee-d24d-4bef-b476-ca47dff711b6\") " Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710032 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "827d75ee-d24d-4bef-b476-ca47dff711b6" (UID: "827d75ee-d24d-4bef-b476-ca47dff711b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710067 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "827d75ee-d24d-4bef-b476-ca47dff711b6" (UID: "827d75ee-d24d-4bef-b476-ca47dff711b6"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710171 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-config" (OuterVolumeSpecName: "config") pod "827d75ee-d24d-4bef-b476-ca47dff711b6" (UID: "827d75ee-d24d-4bef-b476-ca47dff711b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710470 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "827d75ee-d24d-4bef-b476-ca47dff711b6" (UID: "827d75ee-d24d-4bef-b476-ca47dff711b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710556 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "827d75ee-d24d-4bef-b476-ca47dff711b6" (UID: "827d75ee-d24d-4bef-b476-ca47dff711b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710733 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710748 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710761 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710773 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.710784 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/827d75ee-d24d-4bef-b476-ca47dff711b6-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.771835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827d75ee-d24d-4bef-b476-ca47dff711b6-kube-api-access-f8z79" (OuterVolumeSpecName: "kube-api-access-f8z79") pod "827d75ee-d24d-4bef-b476-ca47dff711b6" (UID: "827d75ee-d24d-4bef-b476-ca47dff711b6"). InnerVolumeSpecName "kube-api-access-f8z79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.818055 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8z79\" (UniqueName: \"kubernetes.io/projected/827d75ee-d24d-4bef-b476-ca47dff711b6-kube-api-access-f8z79\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:21 crc kubenswrapper[4768]: I1124 18:10:21.933056 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kg8vc"] Nov 24 18:10:22 crc kubenswrapper[4768]: I1124 18:10:22.637111 4768 generic.go:334] "Generic (PLEG): container finished" podID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerID="6c1703fe1fa7e0cc4999a5c66fc2c4b1cde37a276233e6f3e65db73b6c319a29" exitCode=0 Nov 24 18:10:22 crc kubenswrapper[4768]: I1124 18:10:22.637196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" event={"ID":"e0edf9a4-37b3-4519-84ca-2c4fce4c0808","Type":"ContainerDied","Data":"6c1703fe1fa7e0cc4999a5c66fc2c4b1cde37a276233e6f3e65db73b6c319a29"} Nov 24 18:10:22 crc kubenswrapper[4768]: I1124 18:10:22.637416 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-8x7k5" Nov 24 18:10:22 crc kubenswrapper[4768]: I1124 18:10:22.637460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" event={"ID":"e0edf9a4-37b3-4519-84ca-2c4fce4c0808","Type":"ContainerStarted","Data":"8b4f3d1acf54158bb92e31855ac9cc7e545fada8f2362bb8483fc0eb1c835aba"} Nov 24 18:10:22 crc kubenswrapper[4768]: I1124 18:10:22.731939 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8x7k5"] Nov 24 18:10:22 crc kubenswrapper[4768]: I1124 18:10:22.739319 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-8x7k5"] Nov 24 18:10:23 crc kubenswrapper[4768]: I1124 18:10:23.660059 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" event={"ID":"e0edf9a4-37b3-4519-84ca-2c4fce4c0808","Type":"ContainerStarted","Data":"58361ec7b1e7b265454a61e5ae93f7eea8623cfa5e0e8beba17f76dddc8663d4"} Nov 24 18:10:23 crc kubenswrapper[4768]: I1124 18:10:23.660528 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:23 crc kubenswrapper[4768]: I1124 18:10:23.702230 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" podStartSLOduration=2.70219546 podStartE2EDuration="2.70219546s" podCreationTimestamp="2025-11-24 18:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:10:23.687810027 +0000 UTC m=+1262.548391834" watchObservedRunningTime="2025-11-24 18:10:23.70219546 +0000 UTC m=+1262.562777287" Nov 24 18:10:23 crc kubenswrapper[4768]: I1124 18:10:23.911117 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827d75ee-d24d-4bef-b476-ca47dff711b6" path="/var/lib/kubelet/pods/827d75ee-d24d-4bef-b476-ca47dff711b6/volumes" Nov 24 18:10:24 crc kubenswrapper[4768]: E1124 18:10:24.807935 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827d75ee_d24d_4bef_b476_ca47dff711b6.slice\": RecentStats: unable to find data in memory cache]" Nov 24 
18:10:31 crc kubenswrapper[4768]: I1124 18:10:31.557744 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" Nov 24 18:10:31 crc kubenswrapper[4768]: I1124 18:10:31.619561 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-522bl"] Nov 24 18:10:31 crc kubenswrapper[4768]: I1124 18:10:31.622784 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-522bl" podUID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerName="dnsmasq-dns" containerID="cri-o://b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b" gracePeriod=10 Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.069450 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.121943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlp2h\" (UniqueName: \"kubernetes.io/projected/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-kube-api-access-wlp2h\") pod \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.121997 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-nb\") pod \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.122099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-dns-svc\") pod \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.122214 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-sb\") pod \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.122268 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-config\") pod \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\" (UID: \"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b\") " Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.135192 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-kube-api-access-wlp2h" (OuterVolumeSpecName: "kube-api-access-wlp2h") pod "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" (UID: "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b"). InnerVolumeSpecName "kube-api-access-wlp2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.172698 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" (UID: "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b"). InnerVolumeSpecName "dns-svc". 
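[annotation] The pod_startup_latency_tracker record above can be recomputed directly: podStartSLOduration=2.70219546 is watchObservedRunningTime (18:10:23.70219546) minus podCreationTimestamp (18:10:21), and the zeroed firstStartedPulling/lastFinishedPulling stamps show no image pull contributed. A short check, with the timestamp layout inferred from the record text:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-11-24 18:10:21 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // The monotonic suffix "m=+1262.562777287" in the record is not part
        // of the wall time; it has to be stripped before parsing.
        watched, err := time.Parse(layout, "2025-11-24 18:10:23.70219546 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(watched.Sub(created)) // 2.70219546s, the podStartSLOduration above
    }
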
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.175458 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-config" (OuterVolumeSpecName: "config") pod "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" (UID: "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.177547 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" (UID: "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.181523 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" (UID: "98c4e1aa-5468-45fe-8b4b-71af1ea6d19b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.224059 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.224096 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.224108 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlp2h\" (UniqueName: \"kubernetes.io/projected/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-kube-api-access-wlp2h\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.224119 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.224127 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98c4e1aa-5468-45fe-8b4b-71af1ea6d19b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.738421 4768 generic.go:334] "Generic (PLEG): container finished" podID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerID="b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b" exitCode=0 Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.738475 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-522bl" event={"ID":"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b","Type":"ContainerDied","Data":"b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b"} Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.738610 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-522bl" 
event={"ID":"98c4e1aa-5468-45fe-8b4b-71af1ea6d19b","Type":"ContainerDied","Data":"4f07f27564d2aa48948aae44a0a76e1c2de0b89e14737d7569c62451a7c3135b"} Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.738633 4768 scope.go:117] "RemoveContainer" containerID="b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.738817 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-522bl" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.779201 4768 scope.go:117] "RemoveContainer" containerID="bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.793241 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-522bl"] Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.805334 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-522bl"] Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.823400 4768 scope.go:117] "RemoveContainer" containerID="b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b" Nov 24 18:10:32 crc kubenswrapper[4768]: E1124 18:10:32.823888 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b\": container with ID starting with b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b not found: ID does not exist" containerID="b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.823942 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b"} err="failed to get container status \"b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b\": rpc error: code = NotFound desc = could not find container \"b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b\": container with ID starting with b3abf5967559f440ee20dfc358b32519120327288989b9f7fb8da7698dffd23b not found: ID does not exist" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.823969 4768 scope.go:117] "RemoveContainer" containerID="bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f" Nov 24 18:10:32 crc kubenswrapper[4768]: E1124 18:10:32.824422 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f\": container with ID starting with bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f not found: ID does not exist" containerID="bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f" Nov 24 18:10:32 crc kubenswrapper[4768]: I1124 18:10:32.824456 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f"} err="failed to get container status \"bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f\": rpc error: code = NotFound desc = could not find container \"bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f\": container with ID starting with bac150f2c7d5a3c89ba3271bc0f776e9ab4026be9a4f39d8326c4639483e193f not found: ID does not exist" Nov 24 18:10:33 crc kubenswrapper[4768]: I1124 
Nov 24 18:10:35 crc kubenswrapper[4768]: E1124 18:10:35.039564 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827d75ee_d24d_4bef_b476_ca47dff711b6.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.550042 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"]
Nov 24 18:10:37 crc kubenswrapper[4768]: E1124 18:10:37.550781 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerName="dnsmasq-dns"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.550797 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerName="dnsmasq-dns"
Nov 24 18:10:37 crc kubenswrapper[4768]: E1124 18:10:37.550831 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerName="init"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.550840 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerName="init"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.551033 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c4e1aa-5468-45fe-8b4b-71af1ea6d19b" containerName="dnsmasq-dns"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.551669 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.554204 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.557463 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.557821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.561922 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.563199 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"]
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.624258 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.624313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.624429 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.624466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5spx\" (UniqueName: \"kubernetes.io/projected/28f85e24-4898-4ff4-8fca-995a0a85ad6e-kube-api-access-h5spx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.726611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5spx\" (UniqueName: \"kubernetes.io/projected/28f85e24-4898-4ff4-8fca-995a0a85ad6e-kube-api-access-h5spx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.726776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.726812 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.726914 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.732836 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.733236 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.733251 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.743025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5spx\" (UniqueName: \"kubernetes.io/projected/28f85e24-4898-4ff4-8fca-995a0a85ad6e-kube-api-access-h5spx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:37 crc kubenswrapper[4768]: I1124 18:10:37.871849 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:10:38 crc kubenswrapper[4768]: I1124 18:10:38.378697 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"]
Nov 24 18:10:38 crc kubenswrapper[4768]: W1124 18:10:38.381423 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f85e24_4898_4ff4_8fca_995a0a85ad6e.slice/crio-c39fe9149b2976b6c97359f0cb7093ef8fbc651c4cf78abdb2a15de1aaae806c WatchSource:0}: Error finding container c39fe9149b2976b6c97359f0cb7093ef8fbc651c4cf78abdb2a15de1aaae806c: Status 404 returned error can't find the container with id c39fe9149b2976b6c97359f0cb7093ef8fbc651c4cf78abdb2a15de1aaae806c
Nov 24 18:10:38 crc kubenswrapper[4768]: I1124 18:10:38.383819 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 18:10:38 crc kubenswrapper[4768]: I1124 18:10:38.791041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg" event={"ID":"28f85e24-4898-4ff4-8fca-995a0a85ad6e","Type":"ContainerStarted","Data":"c39fe9149b2976b6c97359f0cb7093ef8fbc651c4cf78abdb2a15de1aaae806c"}
Nov 24 18:10:45 crc kubenswrapper[4768]: E1124 18:10:45.269409 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827d75ee_d24d_4bef_b476_ca47dff711b6.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 18:10:47 crc kubenswrapper[4768]: I1124 18:10:47.875077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg" event={"ID":"28f85e24-4898-4ff4-8fca-995a0a85ad6e","Type":"ContainerStarted","Data":"1ae642ea1a38d2cf0cc6d3050cf672a7f9e05473aa0526aa22844cab052d4588"}
Nov 24 18:10:47 crc kubenswrapper[4768]: I1124 18:10:47.898674 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg" podStartSLOduration=2.535232259 podStartE2EDuration="10.898655763s" podCreationTimestamp="2025-11-24 18:10:37 +0000 UTC" firstStartedPulling="2025-11-24 18:10:38.383593188 +0000 UTC m=+1277.244174965" lastFinishedPulling="2025-11-24 18:10:46.747016692 +0000 UTC m=+1285.607598469" observedRunningTime="2025-11-24 18:10:47.896429271 +0000 UTC m=+1286.757011048" watchObservedRunningTime="2025-11-24 18:10:47.898655763 +0000 UTC m=+1286.759237540"
Nov 24 18:10:48 crc kubenswrapper[4768]: I1124 18:10:48.884742 4768 generic.go:334] "Generic (PLEG): container finished" podID="2d3ded99-92ff-43cc-83de-6042d6c83acf" containerID="3fc089219835810795365b2350eea34d1e58e22ad1e46ea293bfd53bdff6cebc" exitCode=0
Nov 24 18:10:48 crc kubenswrapper[4768]: I1124 18:10:48.884847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d3ded99-92ff-43cc-83de-6042d6c83acf","Type":"ContainerDied","Data":"3fc089219835810795365b2350eea34d1e58e22ad1e46ea293bfd53bdff6cebc"}
Nov 24 18:10:49 crc kubenswrapper[4768]: I1124 18:10:49.894579 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2d3ded99-92ff-43cc-83de-6042d6c83acf","Type":"ContainerStarted","Data":"21d2007b4c76d2b8362d1a107e42467861e243fb2356f96ec2beaf50d598f98c"}
Nov 24 18:10:49 crc kubenswrapper[4768]: I1124 18:10:49.895009 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 24 18:10:49 crc kubenswrapper[4768]: I1124 18:10:49.917943 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.917922028 podStartE2EDuration="41.917922028s" podCreationTimestamp="2025-11-24 18:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:10:49.913943716 +0000 UTC m=+1288.774525483" watchObservedRunningTime="2025-11-24 18:10:49.917922028 +0000 UTC m=+1288.778503815"
Nov 24 18:10:52 crc kubenswrapper[4768]: I1124 18:10:52.923714 4768 generic.go:334] "Generic (PLEG): container finished" podID="f61bf1e8-52b3-4777-ad9b-52c8a1cad06c" containerID="ee56884daa16b1af854cfa3d2734cd92eef280084c49930342b37f35a3cab567" exitCode=0
Nov 24 18:10:52 crc kubenswrapper[4768]: I1124 18:10:52.923809 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c","Type":"ContainerDied","Data":"ee56884daa16b1af854cfa3d2734cd92eef280084c49930342b37f35a3cab567"}
Nov 24 18:10:53 crc kubenswrapper[4768]: I1124 18:10:53.934415 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f61bf1e8-52b3-4777-ad9b-52c8a1cad06c","Type":"ContainerStarted","Data":"8682b99df5ca69d0d5b4d8e62947d9792dc64fc2ae6ecef151572bde5d464961"}
Nov 24 18:10:53 crc kubenswrapper[4768]: I1124 18:10:53.934929 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 18:10:53 crc kubenswrapper[4768]: I1124 18:10:53.961961 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.96193874 podStartE2EDuration="36.96193874s" podCreationTimestamp="2025-11-24 18:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:10:53.955817159 +0000 UTC m=+1292.816398936" watchObservedRunningTime="2025-11-24 18:10:53.96193874 +0000 UTC m=+1292.822520517"
Nov 24 18:10:55 crc kubenswrapper[4768]: E1124 18:10:55.510688 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827d75ee_d24d_4bef_b476_ca47dff711b6.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 18:10:58 crc kubenswrapper[4768]: I1124 18:10:58.975835 4768 generic.go:334] "Generic (PLEG): container finished" podID="28f85e24-4898-4ff4-8fca-995a0a85ad6e" containerID="1ae642ea1a38d2cf0cc6d3050cf672a7f9e05473aa0526aa22844cab052d4588" exitCode=0
Nov 24 18:10:58 crc kubenswrapper[4768]: I1124 18:10:58.975929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg" event={"ID":"28f85e24-4898-4ff4-8fca-995a0a85ad6e","Type":"ContainerDied","Data":"1ae642ea1a38d2cf0cc6d3050cf672a7f9e05473aa0526aa22844cab052d4588"}
Nov 24 18:10:59 crc kubenswrapper[4768]: I1124 18:10:59.150700 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.405301 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.532414 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-ssh-key\") pod \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") "
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.532552 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-repo-setup-combined-ca-bundle\") pod \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") "
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.532763 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-inventory\") pod \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") "
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.532827 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5spx\" (UniqueName: \"kubernetes.io/projected/28f85e24-4898-4ff4-8fca-995a0a85ad6e-kube-api-access-h5spx\") pod \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\" (UID: \"28f85e24-4898-4ff4-8fca-995a0a85ad6e\") "
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.538917 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "28f85e24-4898-4ff4-8fca-995a0a85ad6e" (UID: "28f85e24-4898-4ff4-8fca-995a0a85ad6e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.539152 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f85e24-4898-4ff4-8fca-995a0a85ad6e-kube-api-access-h5spx" (OuterVolumeSpecName: "kube-api-access-h5spx") pod "28f85e24-4898-4ff4-8fca-995a0a85ad6e" (UID: "28f85e24-4898-4ff4-8fca-995a0a85ad6e"). InnerVolumeSpecName "kube-api-access-h5spx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.563017 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-inventory" (OuterVolumeSpecName: "inventory") pod "28f85e24-4898-4ff4-8fca-995a0a85ad6e" (UID: "28f85e24-4898-4ff4-8fca-995a0a85ad6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.564808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "28f85e24-4898-4ff4-8fca-995a0a85ad6e" (UID: "28f85e24-4898-4ff4-8fca-995a0a85ad6e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.635935 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5spx\" (UniqueName: \"kubernetes.io/projected/28f85e24-4898-4ff4-8fca-995a0a85ad6e-kube-api-access-h5spx\") on node \"crc\" DevicePath \"\""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.635980 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.635996 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.636009 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f85e24-4898-4ff4-8fca-995a0a85ad6e-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.995021 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg" event={"ID":"28f85e24-4898-4ff4-8fca-995a0a85ad6e","Type":"ContainerDied","Data":"c39fe9149b2976b6c97359f0cb7093ef8fbc651c4cf78abdb2a15de1aaae806c"}
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.995328 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c39fe9149b2976b6c97359f0cb7093ef8fbc651c4cf78abdb2a15de1aaae806c"
Nov 24 18:11:00 crc kubenswrapper[4768]: I1124 18:11:00.995119 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.057313 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"]
Nov 24 18:11:01 crc kubenswrapper[4768]: E1124 18:11:01.059347 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f85e24-4898-4ff4-8fca-995a0a85ad6e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.059396 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f85e24-4898-4ff4-8fca-995a0a85ad6e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.061444 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f85e24-4898-4ff4-8fca-995a0a85ad6e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.072220 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.073776 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"]
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.075946 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.077457 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.078275 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.078289 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.144937 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.145018 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpsz9\" (UniqueName: \"kubernetes.io/projected/eea64d47-cdaf-4b62-906f-914aa42a9e60-kube-api-access-cpsz9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.145120 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.145141 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.246696 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.246808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpsz9\" (UniqueName: \"kubernetes.io/projected/eea64d47-cdaf-4b62-906f-914aa42a9e60-kube-api-access-cpsz9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.246980 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.247022 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.251615 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.251786 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.251978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.266565 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpsz9\" (UniqueName: \"kubernetes.io/projected/eea64d47-cdaf-4b62-906f-914aa42a9e60-kube-api-access-cpsz9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.391789 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"
Nov 24 18:11:01 crc kubenswrapper[4768]: I1124 18:11:01.910267 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"]
Nov 24 18:11:01 crc kubenswrapper[4768]: W1124 18:11:01.913099 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeea64d47_cdaf_4b62_906f_914aa42a9e60.slice/crio-2d0bc7550455dc595b59423bd896587adfd341f1089dc9b847600bdf5b6cc343 WatchSource:0}: Error finding container 2d0bc7550455dc595b59423bd896587adfd341f1089dc9b847600bdf5b6cc343: Status 404 returned error can't find the container with id 2d0bc7550455dc595b59423bd896587adfd341f1089dc9b847600bdf5b6cc343
Nov 24 18:11:02 crc kubenswrapper[4768]: I1124 18:11:02.006094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" event={"ID":"eea64d47-cdaf-4b62-906f-914aa42a9e60","Type":"ContainerStarted","Data":"2d0bc7550455dc595b59423bd896587adfd341f1089dc9b847600bdf5b6cc343"}
Nov 24 18:11:03 crc kubenswrapper[4768]: I1124 18:11:03.017816 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" event={"ID":"eea64d47-cdaf-4b62-906f-914aa42a9e60","Type":"ContainerStarted","Data":"f0bd82ce6408f4d732f61e357e2f27322d7becc81c04d8b2bd9e65ff5d7f2c7a"}
Nov 24 18:11:03 crc kubenswrapper[4768]: I1124 18:11:03.036581 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" podStartSLOduration=1.5983138239999999 podStartE2EDuration="2.036558585s" podCreationTimestamp="2025-11-24 18:11:01 +0000 UTC" firstStartedPulling="2025-11-24 18:11:01.927023279 +0000 UTC m=+1300.787605076" lastFinishedPulling="2025-11-24 18:11:02.36526806 +0000 UTC m=+1301.225849837" observedRunningTime="2025-11-24 18:11:03.031863314 +0000 UTC m=+1301.892445091" watchObservedRunningTime="2025-11-24 18:11:03.036558585 +0000 UTC m=+1301.897140362"
Nov 24 18:11:05 crc kubenswrapper[4768]: E1124 18:11:05.750048 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827d75ee_d24d_4bef_b476_ca47dff711b6.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 18:11:07 crc kubenswrapper[4768]: I1124 18:11:07.987865 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 18:11:13 crc kubenswrapper[4768]: I1124 18:11:13.656830 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:11:13 crc kubenswrapper[4768]: I1124 18:11:13.657440 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:11:15 crc kubenswrapper[4768]: E1124 18:11:15.977225 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827d75ee_d24d_4bef_b476_ca47dff711b6.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 18:11:43 crc kubenswrapper[4768]: I1124 18:11:43.656399 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:11:43 crc kubenswrapper[4768]: I1124 18:11:43.657035 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:11:46 crc kubenswrapper[4768]: I1124 18:11:46.746435 4768 scope.go:117] "RemoveContainer" containerID="8396cb9c173bd3c83a97029fc446cbfbcff303606f3c4c0551d36cd572cb3622"
Nov 24 18:11:46 crc kubenswrapper[4768]: I1124 18:11:46.767183 4768 scope.go:117] "RemoveContainer" containerID="392e371c78af459e6eb5e34f30524ffceda063d513891078f8d935948772fc79"
Nov 24 18:11:46 crc kubenswrapper[4768]: I1124 18:11:46.820306 4768 scope.go:117] "RemoveContainer" containerID="9670aca0447e91bed48b8acb8636d7b8a53952ca3b86abc67ce05de9ccd1308c"
Nov 24 18:11:46 crc kubenswrapper[4768]: I1124 18:11:46.844082 4768 scope.go:117] "RemoveContainer" containerID="bb4d433793cfc64e4bfe85689d9a80f51f90462a6efa405656c8be12f3d73cfa"
Nov 24 18:12:13 crc kubenswrapper[4768]: I1124 18:12:13.656739 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:12:13 crc kubenswrapper[4768]: I1124 18:12:13.657443 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:12:13 crc kubenswrapper[4768]: I1124 18:12:13.657530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj"
Nov 24 18:12:13 crc kubenswrapper[4768]: I1124 18:12:13.658452 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40df835a5ec9cfe7b392f2013854288a324716103ffb3a94522610c0a0ffe19d"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 18:12:13 crc kubenswrapper[4768]: I1124 18:12:13.658576 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://40df835a5ec9cfe7b392f2013854288a324716103ffb3a94522610c0a0ffe19d" gracePeriod=600
Nov 24 18:12:14 crc kubenswrapper[4768]: I1124 18:12:14.678572 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="40df835a5ec9cfe7b392f2013854288a324716103ffb3a94522610c0a0ffe19d" exitCode=0
Nov 24 18:12:14 crc kubenswrapper[4768]: I1124 18:12:14.678651 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"40df835a5ec9cfe7b392f2013854288a324716103ffb3a94522610c0a0ffe19d"}
Nov 24 18:12:14 crc kubenswrapper[4768]: I1124 18:12:14.678895 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"}
Nov 24 18:12:14 crc kubenswrapper[4768]: I1124 18:12:14.678913 4768 scope.go:117] "RemoveContainer" containerID="99ccf8cd01116f9aed046232143cdd0d069d3d1d4cac3ec060c0e2b82cb26f4b"
Nov 24 18:12:46 crc kubenswrapper[4768]: I1124 18:12:46.911850 4768 scope.go:117] "RemoveContainer" containerID="f04ebf8f996a285adf0cd06a8f46a7aa50b6ab6900e8cc8628dbb650c28d9869"
Nov 24 18:12:46 crc kubenswrapper[4768]: I1124 18:12:46.980821 4768 scope.go:117] "RemoveContainer" containerID="d0cce081462ef1068ce1f43d1a38b3ba1170cd30a01d5c10b9e84d42ae4556ba"
Nov 24 18:12:47 crc kubenswrapper[4768]: I1124 18:12:47.021381 4768 scope.go:117] "RemoveContainer" containerID="d7f8c2c98f1774e813d7e6e073329d9d4ce0bba97314115fa6a9ff2d61646888"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.523001 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nm9j"]
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.525832 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.541312 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nm9j"]
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.649125 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-utilities\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.649436 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnfg\" (UniqueName: \"kubernetes.io/projected/e645746c-2382-4877-bac5-bf5f4d510245-kube-api-access-jcnfg\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.649966 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-catalog-content\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.751947 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-utilities\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.752372 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcnfg\" (UniqueName: \"kubernetes.io/projected/e645746c-2382-4877-bac5-bf5f4d510245-kube-api-access-jcnfg\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.752432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-catalog-content\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.752611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-utilities\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.752872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-catalog-content\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.781500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcnfg\" (UniqueName: \"kubernetes.io/projected/e645746c-2382-4877-bac5-bf5f4d510245-kube-api-access-jcnfg\") pod \"community-operators-4nm9j\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") " pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:50 crc kubenswrapper[4768]: I1124 18:12:50.849809 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:12:51 crc kubenswrapper[4768]: I1124 18:12:51.389931 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nm9j"]
Nov 24 18:12:52 crc kubenswrapper[4768]: I1124 18:12:52.026082 4768 generic.go:334] "Generic (PLEG): container finished" podID="e645746c-2382-4877-bac5-bf5f4d510245" containerID="e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7" exitCode=0
Nov 24 18:12:52 crc kubenswrapper[4768]: I1124 18:12:52.026131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerDied","Data":"e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7"}
Nov 24 18:12:52 crc kubenswrapper[4768]: I1124 18:12:52.034664 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerStarted","Data":"a0ce0af6baed2c7c6a62aea6cfcd364191a3501152f02ede34cf8008f0fd291c"}
Nov 24 18:12:55 crc kubenswrapper[4768]: I1124 18:12:55.064570 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerStarted","Data":"43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26"}
Nov 24 18:12:56 crc kubenswrapper[4768]: I1124 18:12:56.075297 4768 generic.go:334] "Generic (PLEG): container finished" podID="e645746c-2382-4877-bac5-bf5f4d510245" containerID="43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26" exitCode=0
Nov 24 18:12:56 crc kubenswrapper[4768]: I1124 18:12:56.075344 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerDied","Data":"43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26"}
Nov 24 18:12:57 crc kubenswrapper[4768]: I1124 18:12:57.083981 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerStarted","Data":"abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55"}
Nov 24 18:12:57 crc kubenswrapper[4768]: I1124 18:12:57.101332 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nm9j" podStartSLOduration=2.320823099 podStartE2EDuration="7.101306333s" podCreationTimestamp="2025-11-24 18:12:50 +0000 UTC" firstStartedPulling="2025-11-24 18:12:52.028993036 +0000 UTC m=+1410.889574823" lastFinishedPulling="2025-11-24 18:12:56.80947628 +0000 UTC m=+1415.670058057" observedRunningTime="2025-11-24 18:12:57.097604694 +0000 UTC m=+1415.958186471" watchObservedRunningTime="2025-11-24 18:12:57.101306333 +0000 UTC m=+1415.961888110"
Nov 24 18:13:00 crc kubenswrapper[4768]: I1124 18:13:00.851851 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:13:00 crc kubenswrapper[4768]: I1124 18:13:00.852682 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:13:00 crc kubenswrapper[4768]: I1124 18:13:00.896268 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:13:01 crc kubenswrapper[4768]: I1124 18:13:01.182915 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:13:01 crc kubenswrapper[4768]: I1124 18:13:01.227428 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nm9j"]
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.143873 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nm9j" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="registry-server" containerID="cri-o://abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55" gracePeriod=2
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.586246 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nm9j"
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.727211 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcnfg\" (UniqueName: \"kubernetes.io/projected/e645746c-2382-4877-bac5-bf5f4d510245-kube-api-access-jcnfg\") pod \"e645746c-2382-4877-bac5-bf5f4d510245\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") "
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.727548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-catalog-content\") pod \"e645746c-2382-4877-bac5-bf5f4d510245\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") "
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.727599 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-utilities\") pod \"e645746c-2382-4877-bac5-bf5f4d510245\" (UID: \"e645746c-2382-4877-bac5-bf5f4d510245\") "
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.728699 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-utilities" (OuterVolumeSpecName: "utilities") pod "e645746c-2382-4877-bac5-bf5f4d510245" (UID: "e645746c-2382-4877-bac5-bf5f4d510245"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.734662 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e645746c-2382-4877-bac5-bf5f4d510245-kube-api-access-jcnfg" (OuterVolumeSpecName: "kube-api-access-jcnfg") pod "e645746c-2382-4877-bac5-bf5f4d510245" (UID: "e645746c-2382-4877-bac5-bf5f4d510245"). InnerVolumeSpecName "kube-api-access-jcnfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.778466 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e645746c-2382-4877-bac5-bf5f4d510245" (UID: "e645746c-2382-4877-bac5-bf5f4d510245"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.829056 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.829095 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e645746c-2382-4877-bac5-bf5f4d510245-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:13:03 crc kubenswrapper[4768]: I1124 18:13:03.829105 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcnfg\" (UniqueName: \"kubernetes.io/projected/e645746c-2382-4877-bac5-bf5f4d510245-kube-api-access-jcnfg\") on node \"crc\" DevicePath \"\"" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.155007 4768 generic.go:334] "Generic (PLEG): container finished" podID="e645746c-2382-4877-bac5-bf5f4d510245" containerID="abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55" exitCode=0 Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.155060 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerDied","Data":"abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55"} Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.155094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nm9j" event={"ID":"e645746c-2382-4877-bac5-bf5f4d510245","Type":"ContainerDied","Data":"a0ce0af6baed2c7c6a62aea6cfcd364191a3501152f02ede34cf8008f0fd291c"} Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.155119 4768 scope.go:117] "RemoveContainer" containerID="abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.155270 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nm9j" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.181024 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nm9j"] Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.185661 4768 scope.go:117] "RemoveContainer" containerID="43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.190754 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nm9j"] Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.217065 4768 scope.go:117] "RemoveContainer" containerID="e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.286796 4768 scope.go:117] "RemoveContainer" containerID="abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55" Nov 24 18:13:04 crc kubenswrapper[4768]: E1124 18:13:04.287255 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55\": container with ID starting with abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55 not found: ID does not exist" containerID="abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.287307 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55"} err="failed to get container status \"abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55\": rpc error: code = NotFound desc = could not find container \"abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55\": container with ID starting with abc4e9321ccf305ad61419127ddcfc0dc5e66d958a533478902df0389284ca55 not found: ID does not exist" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.287334 4768 scope.go:117] "RemoveContainer" containerID="43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26" Nov 24 18:13:04 crc kubenswrapper[4768]: E1124 18:13:04.287759 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26\": container with ID starting with 43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26 not found: ID does not exist" containerID="43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.287816 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26"} err="failed to get container status \"43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26\": rpc error: code = NotFound desc = could not find container \"43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26\": container with ID starting with 43bc764cb47ed3a31bfa64ea00ca7a455dfac80445448f18f8a344fc8ec7dc26 not found: ID does not exist" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.288071 4768 scope.go:117] "RemoveContainer" containerID="e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7" Nov 24 18:13:04 crc kubenswrapper[4768]: E1124 18:13:04.288370 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7\": container with ID starting with e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7 not found: ID does not exist" containerID="e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7" Nov 24 18:13:04 crc kubenswrapper[4768]: I1124 18:13:04.288406 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7"} err="failed to get container status \"e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7\": rpc error: code = NotFound desc = could not find container \"e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7\": container with ID starting with e92a081d761b4bc479be3aee2262fc61b05c0856803ebd01610e804a83085bb7 not found: ID does not exist" Nov 24 18:13:05 crc kubenswrapper[4768]: I1124 18:13:05.908560 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e645746c-2382-4877-bac5-bf5f4d510245" path="/var/lib/kubelet/pods/e645746c-2382-4877-bac5-bf5f4d510245/volumes" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.310780 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66k9f"] Nov 24 18:13:43 crc kubenswrapper[4768]: E1124 18:13:43.311942 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="extract-content" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.311957 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="extract-content" Nov 24 18:13:43 crc kubenswrapper[4768]: E1124 18:13:43.311969 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="registry-server" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.311975 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="registry-server" Nov 24 18:13:43 crc kubenswrapper[4768]: E1124 18:13:43.311996 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="extract-utilities" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.312002 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="extract-utilities" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.312157 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e645746c-2382-4877-bac5-bf5f4d510245" containerName="registry-server" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.313604 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.328156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66k9f"] Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.489218 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-utilities\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.489598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-catalog-content\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.489754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfdrw\" (UniqueName: \"kubernetes.io/projected/47be4346-349f-41a3-b5a3-2f976949b28d-kube-api-access-bfdrw\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.591630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfdrw\" (UniqueName: \"kubernetes.io/projected/47be4346-349f-41a3-b5a3-2f976949b28d-kube-api-access-bfdrw\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.591777 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-utilities\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.591804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-catalog-content\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.592269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-utilities\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.592379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-catalog-content\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.619310 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bfdrw\" (UniqueName: \"kubernetes.io/projected/47be4346-349f-41a3-b5a3-2f976949b28d-kube-api-access-bfdrw\") pod \"certified-operators-66k9f\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:43 crc kubenswrapper[4768]: I1124 18:13:43.635198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:44 crc kubenswrapper[4768]: I1124 18:13:44.142920 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66k9f"] Nov 24 18:13:44 crc kubenswrapper[4768]: W1124 18:13:44.149235 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47be4346_349f_41a3_b5a3_2f976949b28d.slice/crio-9b63ff898609b12be9dc58536bf7148ce66311cb6d126471eb42ab0d048cf326 WatchSource:0}: Error finding container 9b63ff898609b12be9dc58536bf7148ce66311cb6d126471eb42ab0d048cf326: Status 404 returned error can't find the container with id 9b63ff898609b12be9dc58536bf7148ce66311cb6d126471eb42ab0d048cf326 Nov 24 18:13:44 crc kubenswrapper[4768]: I1124 18:13:44.522970 4768 generic.go:334] "Generic (PLEG): container finished" podID="47be4346-349f-41a3-b5a3-2f976949b28d" containerID="b844a7cf84cc250458c7da12ee412a20be71c7176c93a4a653ff75b118372748" exitCode=0 Nov 24 18:13:44 crc kubenswrapper[4768]: I1124 18:13:44.523058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66k9f" event={"ID":"47be4346-349f-41a3-b5a3-2f976949b28d","Type":"ContainerDied","Data":"b844a7cf84cc250458c7da12ee412a20be71c7176c93a4a653ff75b118372748"} Nov 24 18:13:44 crc kubenswrapper[4768]: I1124 18:13:44.523200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66k9f" event={"ID":"47be4346-349f-41a3-b5a3-2f976949b28d","Type":"ContainerStarted","Data":"9b63ff898609b12be9dc58536bf7148ce66311cb6d126471eb42ab0d048cf326"} Nov 24 18:13:46 crc kubenswrapper[4768]: I1124 18:13:46.542160 4768 generic.go:334] "Generic (PLEG): container finished" podID="47be4346-349f-41a3-b5a3-2f976949b28d" containerID="269df8c090ca696c2a0324736fdb42fb1bd0ccd9b0fd870338de3dd0368ce346" exitCode=0 Nov 24 18:13:46 crc kubenswrapper[4768]: I1124 18:13:46.542254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66k9f" event={"ID":"47be4346-349f-41a3-b5a3-2f976949b28d","Type":"ContainerDied","Data":"269df8c090ca696c2a0324736fdb42fb1bd0ccd9b0fd870338de3dd0368ce346"} Nov 24 18:13:47 crc kubenswrapper[4768]: I1124 18:13:47.555005 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66k9f" event={"ID":"47be4346-349f-41a3-b5a3-2f976949b28d","Type":"ContainerStarted","Data":"b49f27ab7e9b44cbb8b20ccaffa06468d65a7b9122e3a3094dd938b692310fbf"} Nov 24 18:13:47 crc kubenswrapper[4768]: I1124 18:13:47.580702 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66k9f" podStartSLOduration=1.840065084 podStartE2EDuration="4.580681587s" podCreationTimestamp="2025-11-24 18:13:43 +0000 UTC" firstStartedPulling="2025-11-24 18:13:44.524812992 +0000 UTC m=+1463.385394779" lastFinishedPulling="2025-11-24 18:13:47.265429505 +0000 UTC m=+1466.126011282" observedRunningTime="2025-11-24 18:13:47.573309877 +0000 UTC 
m=+1466.433891664" watchObservedRunningTime="2025-11-24 18:13:47.580681587 +0000 UTC m=+1466.441263364" Nov 24 18:13:53 crc kubenswrapper[4768]: I1124 18:13:53.635363 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:53 crc kubenswrapper[4768]: I1124 18:13:53.635973 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:53 crc kubenswrapper[4768]: I1124 18:13:53.684230 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:54 crc kubenswrapper[4768]: I1124 18:13:54.654036 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:54 crc kubenswrapper[4768]: I1124 18:13:54.695708 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66k9f"] Nov 24 18:13:56 crc kubenswrapper[4768]: I1124 18:13:56.626341 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-66k9f" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="registry-server" containerID="cri-o://b49f27ab7e9b44cbb8b20ccaffa06468d65a7b9122e3a3094dd938b692310fbf" gracePeriod=2 Nov 24 18:13:57 crc kubenswrapper[4768]: I1124 18:13:57.647901 4768 generic.go:334] "Generic (PLEG): container finished" podID="47be4346-349f-41a3-b5a3-2f976949b28d" containerID="b49f27ab7e9b44cbb8b20ccaffa06468d65a7b9122e3a3094dd938b692310fbf" exitCode=0 Nov 24 18:13:57 crc kubenswrapper[4768]: I1124 18:13:57.647973 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66k9f" event={"ID":"47be4346-349f-41a3-b5a3-2f976949b28d","Type":"ContainerDied","Data":"b49f27ab7e9b44cbb8b20ccaffa06468d65a7b9122e3a3094dd938b692310fbf"} Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.197644 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.260019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-catalog-content\") pod \"47be4346-349f-41a3-b5a3-2f976949b28d\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.260355 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-utilities\") pod \"47be4346-349f-41a3-b5a3-2f976949b28d\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.260452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfdrw\" (UniqueName: \"kubernetes.io/projected/47be4346-349f-41a3-b5a3-2f976949b28d-kube-api-access-bfdrw\") pod \"47be4346-349f-41a3-b5a3-2f976949b28d\" (UID: \"47be4346-349f-41a3-b5a3-2f976949b28d\") " Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.261322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-utilities" (OuterVolumeSpecName: "utilities") pod "47be4346-349f-41a3-b5a3-2f976949b28d" (UID: "47be4346-349f-41a3-b5a3-2f976949b28d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.271121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47be4346-349f-41a3-b5a3-2f976949b28d-kube-api-access-bfdrw" (OuterVolumeSpecName: "kube-api-access-bfdrw") pod "47be4346-349f-41a3-b5a3-2f976949b28d" (UID: "47be4346-349f-41a3-b5a3-2f976949b28d"). InnerVolumeSpecName "kube-api-access-bfdrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.309337 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47be4346-349f-41a3-b5a3-2f976949b28d" (UID: "47be4346-349f-41a3-b5a3-2f976949b28d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.363128 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.363167 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47be4346-349f-41a3-b5a3-2f976949b28d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.363183 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfdrw\" (UniqueName: \"kubernetes.io/projected/47be4346-349f-41a3-b5a3-2f976949b28d-kube-api-access-bfdrw\") on node \"crc\" DevicePath \"\"" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.658881 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66k9f" event={"ID":"47be4346-349f-41a3-b5a3-2f976949b28d","Type":"ContainerDied","Data":"9b63ff898609b12be9dc58536bf7148ce66311cb6d126471eb42ab0d048cf326"} Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.658937 4768 scope.go:117] "RemoveContainer" containerID="b49f27ab7e9b44cbb8b20ccaffa06468d65a7b9122e3a3094dd938b692310fbf" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.658961 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66k9f" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.677743 4768 scope.go:117] "RemoveContainer" containerID="269df8c090ca696c2a0324736fdb42fb1bd0ccd9b0fd870338de3dd0368ce346" Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.697458 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66k9f"] Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.706465 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-66k9f"] Nov 24 18:13:58 crc kubenswrapper[4768]: I1124 18:13:58.718015 4768 scope.go:117] "RemoveContainer" containerID="b844a7cf84cc250458c7da12ee412a20be71c7176c93a4a653ff75b118372748" Nov 24 18:13:59 crc kubenswrapper[4768]: I1124 18:13:59.911210 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" path="/var/lib/kubelet/pods/47be4346-349f-41a3-b5a3-2f976949b28d/volumes" Nov 24 18:14:05 crc kubenswrapper[4768]: I1124 18:14:05.728862 4768 generic.go:334] "Generic (PLEG): container finished" podID="eea64d47-cdaf-4b62-906f-914aa42a9e60" containerID="f0bd82ce6408f4d732f61e357e2f27322d7becc81c04d8b2bd9e65ff5d7f2c7a" exitCode=0 Nov 24 18:14:05 crc kubenswrapper[4768]: I1124 18:14:05.728941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" event={"ID":"eea64d47-cdaf-4b62-906f-914aa42a9e60","Type":"ContainerDied","Data":"f0bd82ce6408f4d732f61e357e2f27322d7becc81c04d8b2bd9e65ff5d7f2c7a"} Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.133184 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.227580 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-inventory\") pod \"eea64d47-cdaf-4b62-906f-914aa42a9e60\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.227832 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpsz9\" (UniqueName: \"kubernetes.io/projected/eea64d47-cdaf-4b62-906f-914aa42a9e60-kube-api-access-cpsz9\") pod \"eea64d47-cdaf-4b62-906f-914aa42a9e60\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.227883 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-bootstrap-combined-ca-bundle\") pod \"eea64d47-cdaf-4b62-906f-914aa42a9e60\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.227953 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-ssh-key\") pod \"eea64d47-cdaf-4b62-906f-914aa42a9e60\" (UID: \"eea64d47-cdaf-4b62-906f-914aa42a9e60\") " Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.232904 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "eea64d47-cdaf-4b62-906f-914aa42a9e60" (UID: "eea64d47-cdaf-4b62-906f-914aa42a9e60"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.233269 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea64d47-cdaf-4b62-906f-914aa42a9e60-kube-api-access-cpsz9" (OuterVolumeSpecName: "kube-api-access-cpsz9") pod "eea64d47-cdaf-4b62-906f-914aa42a9e60" (UID: "eea64d47-cdaf-4b62-906f-914aa42a9e60"). InnerVolumeSpecName "kube-api-access-cpsz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.252878 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-inventory" (OuterVolumeSpecName: "inventory") pod "eea64d47-cdaf-4b62-906f-914aa42a9e60" (UID: "eea64d47-cdaf-4b62-906f-914aa42a9e60"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.255399 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eea64d47-cdaf-4b62-906f-914aa42a9e60" (UID: "eea64d47-cdaf-4b62-906f-914aa42a9e60"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.330476 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.330527 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.330541 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpsz9\" (UniqueName: \"kubernetes.io/projected/eea64d47-cdaf-4b62-906f-914aa42a9e60-kube-api-access-cpsz9\") on node \"crc\" DevicePath \"\"" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.330552 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea64d47-cdaf-4b62-906f-914aa42a9e60-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.746643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" event={"ID":"eea64d47-cdaf-4b62-906f-914aa42a9e60","Type":"ContainerDied","Data":"2d0bc7550455dc595b59423bd896587adfd341f1089dc9b847600bdf5b6cc343"} Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.746969 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d0bc7550455dc595b59423bd896587adfd341f1089dc9b847600bdf5b6cc343" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.746689 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826336 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v"] Nov 24 18:14:07 crc kubenswrapper[4768]: E1124 18:14:07.826696 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="extract-utilities" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826714 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="extract-utilities" Nov 24 18:14:07 crc kubenswrapper[4768]: E1124 18:14:07.826741 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea64d47-cdaf-4b62-906f-914aa42a9e60" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826749 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea64d47-cdaf-4b62-906f-914aa42a9e60" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:14:07 crc kubenswrapper[4768]: E1124 18:14:07.826762 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="extract-content" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826769 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="extract-content" Nov 24 18:14:07 crc kubenswrapper[4768]: E1124 18:14:07.826780 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="registry-server" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826786 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="registry-server" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826968 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea64d47-cdaf-4b62-906f-914aa42a9e60" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.826984 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="47be4346-349f-41a3-b5a3-2f976949b28d" containerName="registry-server" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.827574 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.832082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.832216 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.832262 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.832654 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.836782 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v"] Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.942987 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.944481 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:07 crc kubenswrapper[4768]: I1124 18:14:07.944565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rh2\" (UniqueName: \"kubernetes.io/projected/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-kube-api-access-v9rh2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.046825 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.046878 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rh2\" (UniqueName: \"kubernetes.io/projected/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-kube-api-access-v9rh2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.046925 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.052404 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.053083 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.066750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rh2\" (UniqueName: \"kubernetes.io/projected/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-kube-api-access-v9rh2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.144623 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.647900 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v"] Nov 24 18:14:08 crc kubenswrapper[4768]: I1124 18:14:08.755696 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" event={"ID":"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e","Type":"ContainerStarted","Data":"e974bca2a42f5aba7383cf58c827fc4b478dbb980516011be0ff5d3aaa223d23"} Nov 24 18:14:11 crc kubenswrapper[4768]: I1124 18:14:11.796710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" event={"ID":"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e","Type":"ContainerStarted","Data":"aa1407d98acb35b29a51aefb0376dcac03265d8d36005b00520241f977e4b3ff"} Nov 24 18:14:11 crc kubenswrapper[4768]: I1124 18:14:11.815679 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" podStartSLOduration=2.228861651 podStartE2EDuration="4.815654309s" podCreationTimestamp="2025-11-24 18:14:07 +0000 UTC" firstStartedPulling="2025-11-24 18:14:08.657685446 +0000 UTC m=+1487.518267223" lastFinishedPulling="2025-11-24 18:14:11.244478084 +0000 UTC m=+1490.105059881" observedRunningTime="2025-11-24 18:14:11.810824921 +0000 UTC m=+1490.671406708" watchObservedRunningTime="2025-11-24 18:14:11.815654309 +0000 UTC m=+1490.676236086" Nov 24 18:14:13 crc kubenswrapper[4768]: I1124 18:14:13.656586 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
Nov 24 18:14:13 crc kubenswrapper[4768]: I1124 18:14:13.656586 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:14:13 crc kubenswrapper[4768]: I1124 18:14:13.656939 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:14:43 crc kubenswrapper[4768]: I1124 18:14:43.657134 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:14:43 crc kubenswrapper[4768]: I1124 18:14:43.657774 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.182507 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87bxg"]
Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.184840 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87bxg"
Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.191996 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87bxg"]
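The machine-config-daemon records above are a liveness probe failing at the transport layer: the kubelet issues an HTTP GET against the container's /health endpoint and gets connection refused, once per probe period. A minimal sketch of such an HTTP probe; the endpoint comes from the log, the one-second timeout and the rest are illustrative assumptions.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one liveness check: an HTTP GET where any transport
// error (e.g. connection refused) counts as failure, and a 2xx/3xx status
// counts as success.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the machine-config-daemon probe records above.
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("Probe succeeded")
	}
}
```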
\"kubernetes.io/projected/8db46565-c403-4103-8399-23942d4198b9-kube-api-access-r9tgr\") pod \"redhat-operators-87bxg\" (UID: \"8db46565-c403-4103-8399-23942d4198b9\") " pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.382404 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8db46565-c403-4103-8399-23942d4198b9-catalog-content\") pod \"redhat-operators-87bxg\" (UID: \"8db46565-c403-4103-8399-23942d4198b9\") " pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.382557 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8db46565-c403-4103-8399-23942d4198b9-utilities\") pod \"redhat-operators-87bxg\" (UID: \"8db46565-c403-4103-8399-23942d4198b9\") " pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.383151 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8db46565-c403-4103-8399-23942d4198b9-utilities\") pod \"redhat-operators-87bxg\" (UID: \"8db46565-c403-4103-8399-23942d4198b9\") " pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.401785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9tgr\" (UniqueName: \"kubernetes.io/projected/8db46565-c403-4103-8399-23942d4198b9-kube-api-access-r9tgr\") pod \"redhat-operators-87bxg\" (UID: \"8db46565-c403-4103-8399-23942d4198b9\") " pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.507581 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:14:48 crc kubenswrapper[4768]: I1124 18:14:48.995904 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87bxg"] Nov 24 18:14:49 crc kubenswrapper[4768]: I1124 18:14:49.143855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87bxg" event={"ID":"8db46565-c403-4103-8399-23942d4198b9","Type":"ContainerStarted","Data":"4171b2acae055d8dd9a4fdc074ceec54027df503f5bf9253cbd8143d25fb38b0"} Nov 24 18:14:50 crc kubenswrapper[4768]: I1124 18:14:50.157623 4768 generic.go:334] "Generic (PLEG): container finished" podID="8db46565-c403-4103-8399-23942d4198b9" containerID="a4e376249aedb473186e3687d52ecc105abb2db455cbcea8b083c7dc59b14910" exitCode=0 Nov 24 18:14:50 crc kubenswrapper[4768]: I1124 18:14:50.157729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87bxg" event={"ID":"8db46565-c403-4103-8399-23942d4198b9","Type":"ContainerDied","Data":"a4e376249aedb473186e3687d52ecc105abb2db455cbcea8b083c7dc59b14910"} Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.158221 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml"] Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.160998 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.163044 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.163044 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.171143 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml"] Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.237416 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwslz\" (UniqueName: \"kubernetes.io/projected/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-kube-api-access-wwslz\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.237630 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-config-volume\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.238003 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-secret-volume\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.340165 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-secret-volume\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.340265 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwslz\" (UniqueName: \"kubernetes.io/projected/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-kube-api-access-wwslz\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.340373 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-config-volume\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.341568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-config-volume\") pod 
\"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.347940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-secret-volume\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.362653 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwslz\" (UniqueName: \"kubernetes.io/projected/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-kube-api-access-wwslz\") pod \"collect-profiles-29400135-mb4ml\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.495165 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:00 crc kubenswrapper[4768]: I1124 18:15:00.965932 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml"] Nov 24 18:15:01 crc kubenswrapper[4768]: I1124 18:15:01.260960 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" event={"ID":"3d0753ff-e850-4c66-9e08-c71fe7a86f1d","Type":"ContainerStarted","Data":"4723becf18262a5f152b87f4610174aa6d51611a2522d50024f7ebec0b0b2745"} Nov 24 18:15:02 crc kubenswrapper[4768]: I1124 18:15:02.274682 4768 generic.go:334] "Generic (PLEG): container finished" podID="3d0753ff-e850-4c66-9e08-c71fe7a86f1d" containerID="4a951ae6d3f9f94bb72a0b96bdfd175f6450c2a81fc1bc5cf49313457506bfcf" exitCode=0 Nov 24 18:15:02 crc kubenswrapper[4768]: I1124 18:15:02.274857 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" event={"ID":"3d0753ff-e850-4c66-9e08-c71fe7a86f1d","Type":"ContainerDied","Data":"4a951ae6d3f9f94bb72a0b96bdfd175f6450c2a81fc1bc5cf49313457506bfcf"} Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.287627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87bxg" event={"ID":"8db46565-c403-4103-8399-23942d4198b9","Type":"ContainerStarted","Data":"22d52be2bcb264a1f9b1d53627a1dbd5d98654bf3736b01a755fd1516fe157be"} Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.588326 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.699332 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwslz\" (UniqueName: \"kubernetes.io/projected/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-kube-api-access-wwslz\") pod \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.699497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-config-volume\") pod \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.699715 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-secret-volume\") pod \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\" (UID: \"3d0753ff-e850-4c66-9e08-c71fe7a86f1d\") " Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.700137 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d0753ff-e850-4c66-9e08-c71fe7a86f1d" (UID: "3d0753ff-e850-4c66-9e08-c71fe7a86f1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.700751 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.706207 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d0753ff-e850-4c66-9e08-c71fe7a86f1d" (UID: "3d0753ff-e850-4c66-9e08-c71fe7a86f1d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.706370 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-kube-api-access-wwslz" (OuterVolumeSpecName: "kube-api-access-wwslz") pod "3d0753ff-e850-4c66-9e08-c71fe7a86f1d" (UID: "3d0753ff-e850-4c66-9e08-c71fe7a86f1d"). InnerVolumeSpecName "kube-api-access-wwslz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.802954 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwslz\" (UniqueName: \"kubernetes.io/projected/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-kube-api-access-wwslz\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:03 crc kubenswrapper[4768]: I1124 18:15:03.803023 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0753ff-e850-4c66-9e08-c71fe7a86f1d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:04 crc kubenswrapper[4768]: I1124 18:15:04.296793 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" Nov 24 18:15:04 crc kubenswrapper[4768]: I1124 18:15:04.296775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml" event={"ID":"3d0753ff-e850-4c66-9e08-c71fe7a86f1d","Type":"ContainerDied","Data":"4723becf18262a5f152b87f4610174aa6d51611a2522d50024f7ebec0b0b2745"} Nov 24 18:15:04 crc kubenswrapper[4768]: I1124 18:15:04.297106 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4723becf18262a5f152b87f4610174aa6d51611a2522d50024f7ebec0b0b2745" Nov 24 18:15:04 crc kubenswrapper[4768]: I1124 18:15:04.298906 4768 generic.go:334] "Generic (PLEG): container finished" podID="8db46565-c403-4103-8399-23942d4198b9" containerID="22d52be2bcb264a1f9b1d53627a1dbd5d98654bf3736b01a755fd1516fe157be" exitCode=0 Nov 24 18:15:04 crc kubenswrapper[4768]: I1124 18:15:04.298948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87bxg" event={"ID":"8db46565-c403-4103-8399-23942d4198b9","Type":"ContainerDied","Data":"22d52be2bcb264a1f9b1d53627a1dbd5d98654bf3736b01a755fd1516fe157be"} Nov 24 18:15:05 crc kubenswrapper[4768]: I1124 18:15:05.314025 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87bxg" event={"ID":"8db46565-c403-4103-8399-23942d4198b9","Type":"ContainerStarted","Data":"5710a3bc6be9627b4d079a3ff3e604d624f761bdf489053d5b8919af47c79211"} Nov 24 18:15:05 crc kubenswrapper[4768]: I1124 18:15:05.339944 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87bxg" podStartSLOduration=2.786748069 podStartE2EDuration="17.339926896s" podCreationTimestamp="2025-11-24 18:14:48 +0000 UTC" firstStartedPulling="2025-11-24 18:14:50.160534131 +0000 UTC m=+1529.021115898" lastFinishedPulling="2025-11-24 18:15:04.713712948 +0000 UTC m=+1543.574294725" observedRunningTime="2025-11-24 18:15:05.334632384 +0000 UTC m=+1544.195214181" watchObservedRunningTime="2025-11-24 18:15:05.339926896 +0000 UTC m=+1544.200508673" Nov 24 18:15:08 crc kubenswrapper[4768]: I1124 18:15:08.507948 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:15:08 crc kubenswrapper[4768]: I1124 18:15:08.508253 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:15:09 crc kubenswrapper[4768]: I1124 18:15:09.554480 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-87bxg" podUID="8db46565-c403-4103-8399-23942d4198b9" containerName="registry-server" probeResult="failure" output=< Nov 24 18:15:09 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 18:15:09 crc kubenswrapper[4768]: > Nov 24 18:15:13 crc kubenswrapper[4768]: I1124 18:15:13.656647 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:15:13 crc kubenswrapper[4768]: I1124 18:15:13.658161 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" 
podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:15:13 crc kubenswrapper[4768]: I1124 18:15:13.658417 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:15:13 crc kubenswrapper[4768]: I1124 18:15:13.659533 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:15:13 crc kubenswrapper[4768]: I1124 18:15:13.659590 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" gracePeriod=600 Nov 24 18:15:13 crc kubenswrapper[4768]: E1124 18:15:13.805764 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:15:14 crc kubenswrapper[4768]: I1124 18:15:14.395330 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" exitCode=0 Nov 24 18:15:14 crc kubenswrapper[4768]: I1124 18:15:14.395380 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"} Nov 24 18:15:14 crc kubenswrapper[4768]: I1124 18:15:14.395422 4768 scope.go:117] "RemoveContainer" containerID="40df835a5ec9cfe7b392f2013854288a324716103ffb3a94522610c0a0ffe19d" Nov 24 18:15:14 crc kubenswrapper[4768]: I1124 18:15:14.395942 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:15:14 crc kubenswrapper[4768]: E1124 18:15:14.397195 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:15:18 crc kubenswrapper[4768]: I1124 18:15:18.439193 4768 generic.go:334] "Generic (PLEG): container finished" podID="991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" containerID="aa1407d98acb35b29a51aefb0376dcac03265d8d36005b00520241f977e4b3ff" exitCode=0 Nov 24 18:15:18 crc kubenswrapper[4768]: I1124 18:15:18.439775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" event={"ID":"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e","Type":"ContainerDied","Data":"aa1407d98acb35b29a51aefb0376dcac03265d8d36005b00520241f977e4b3ff"} Nov 24 18:15:18 crc kubenswrapper[4768]: I1124 18:15:18.556417 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:15:18 crc kubenswrapper[4768]: I1124 18:15:18.610089 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87bxg" Nov 24 18:15:19 crc kubenswrapper[4768]: I1124 18:15:19.219601 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87bxg"] Nov 24 18:15:19 crc kubenswrapper[4768]: I1124 18:15:19.384835 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd76t"] Nov 24 18:15:19 crc kubenswrapper[4768]: I1124 18:15:19.385081 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cd76t" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="registry-server" containerID="cri-o://1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" gracePeriod=2 Nov 24 18:15:19 crc kubenswrapper[4768]: E1124 18:15:19.589258 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443 is running failed: container process not found" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:15:19 crc kubenswrapper[4768]: E1124 18:15:19.589599 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443 is running failed: container process not found" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:15:19 crc kubenswrapper[4768]: E1124 18:15:19.589883 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443 is running failed: container process not found" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 18:15:19 crc kubenswrapper[4768]: E1124 18:15:19.589913 4768 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-cd76t" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="registry-server" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.019875 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.026791 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.103088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-utilities\") pod \"8ebedabf-6ef4-463c-98d0-d2afea402f61\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.103253 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rh2\" (UniqueName: \"kubernetes.io/projected/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-kube-api-access-v9rh2\") pod \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.103511 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-catalog-content\") pod \"8ebedabf-6ef4-463c-98d0-d2afea402f61\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.103563 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-inventory\") pod \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.103590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv7br\" (UniqueName: \"kubernetes.io/projected/8ebedabf-6ef4-463c-98d0-d2afea402f61-kube-api-access-pv7br\") pod \"8ebedabf-6ef4-463c-98d0-d2afea402f61\" (UID: \"8ebedabf-6ef4-463c-98d0-d2afea402f61\") " Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.103626 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-ssh-key\") pod \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\" (UID: \"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e\") " Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.105169 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-utilities" (OuterVolumeSpecName: "utilities") pod "8ebedabf-6ef4-463c-98d0-d2afea402f61" (UID: "8ebedabf-6ef4-463c-98d0-d2afea402f61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.124113 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebedabf-6ef4-463c-98d0-d2afea402f61-kube-api-access-pv7br" (OuterVolumeSpecName: "kube-api-access-pv7br") pod "8ebedabf-6ef4-463c-98d0-d2afea402f61" (UID: "8ebedabf-6ef4-463c-98d0-d2afea402f61"). InnerVolumeSpecName "kube-api-access-pv7br". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.124189 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-kube-api-access-v9rh2" (OuterVolumeSpecName: "kube-api-access-v9rh2") pod "991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" (UID: "991cdac9-8e35-4e4d-bba0-f1aa5cb5981e"). InnerVolumeSpecName "kube-api-access-v9rh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.146825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-inventory" (OuterVolumeSpecName: "inventory") pod "991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" (UID: "991cdac9-8e35-4e4d-bba0-f1aa5cb5981e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.153299 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" (UID: "991cdac9-8e35-4e4d-bba0-f1aa5cb5981e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.206224 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.206270 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv7br\" (UniqueName: \"kubernetes.io/projected/8ebedabf-6ef4-463c-98d0-d2afea402f61-kube-api-access-pv7br\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.206287 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.206297 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.206309 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9rh2\" (UniqueName: \"kubernetes.io/projected/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e-kube-api-access-v9rh2\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.209913 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ebedabf-6ef4-463c-98d0-d2afea402f61" (UID: "8ebedabf-6ef4-463c-98d0-d2afea402f61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.307757 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ebedabf-6ef4-463c-98d0-d2afea402f61-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.460157 4768 generic.go:334] "Generic (PLEG): container finished" podID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" exitCode=0 Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.460271 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerDied","Data":"1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443"} Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.460307 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd76t" event={"ID":"8ebedabf-6ef4-463c-98d0-d2afea402f61","Type":"ContainerDied","Data":"d9fce55144a1cd9130935b3075c55ec127f78561fa3327ac66a5e0c382c07901"} Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.460326 4768 scope.go:117] "RemoveContainer" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.460335 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd76t" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.462339 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" event={"ID":"991cdac9-8e35-4e4d-bba0-f1aa5cb5981e","Type":"ContainerDied","Data":"e974bca2a42f5aba7383cf58c827fc4b478dbb980516011be0ff5d3aaa223d23"} Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.462396 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e974bca2a42f5aba7383cf58c827fc4b478dbb980516011be0ff5d3aaa223d23" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.462957 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v" Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.505295 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd76t"] Nov 24 18:15:20 crc kubenswrapper[4768]: I1124 18:15:20.520082 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cd76t"] Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.343061 4768 scope.go:117] "RemoveContainer" containerID="cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.360104 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh"] Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.360728 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="extract-content" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.360754 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="extract-content" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.360768 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="registry-server" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.360776 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="registry-server" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.360793 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="extract-utilities" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.360803 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="extract-utilities" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.360836 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0753ff-e850-4c66-9e08-c71fe7a86f1d" containerName="collect-profiles" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.360845 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0753ff-e850-4c66-9e08-c71fe7a86f1d" containerName="collect-profiles" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.360867 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.360878 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.361097 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.361118 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0753ff-e850-4c66-9e08-c71fe7a86f1d" containerName="collect-profiles" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.361130 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" containerName="registry-server" Nov 24 18:15:21 crc 
kubenswrapper[4768]: I1124 18:15:21.362257 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.365274 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.366275 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.366664 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.366896 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.387786 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh"] Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.402397 4768 scope.go:117] "RemoveContainer" containerID="24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.462159 4768 scope.go:117] "RemoveContainer" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.464384 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443\": container with ID starting with 1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443 not found: ID does not exist" containerID="1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.464455 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443"} err="failed to get container status \"1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443\": rpc error: code = NotFound desc = could not find container \"1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443\": container with ID starting with 1f2c3b2acef081b0ce1b5150fbd0132cc7bed486c07d1bc6ae5b944b15c3f443 not found: ID does not exist" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.464530 4768 scope.go:117] "RemoveContainer" containerID="cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.464955 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846\": container with ID starting with cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846 not found: ID does not exist" containerID="cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.465016 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846"} err="failed to get container status \"cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846\": rpc error: code = NotFound desc = could not find 
container \"cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846\": container with ID starting with cb99899888429652d74695d8991f11ab8996ebf9abed298c9f682d2688e22846 not found: ID does not exist" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.465050 4768 scope.go:117] "RemoveContainer" containerID="24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030" Nov 24 18:15:21 crc kubenswrapper[4768]: E1124 18:15:21.465335 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030\": container with ID starting with 24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030 not found: ID does not exist" containerID="24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.465356 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030"} err="failed to get container status \"24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030\": rpc error: code = NotFound desc = could not find container \"24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030\": container with ID starting with 24fe119e53c0519a15f09a8feac94de30b25783a40df6478c2ce163e160c8030 not found: ID does not exist" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.512961 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.513438 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfgt2\" (UniqueName: \"kubernetes.io/projected/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-kube-api-access-jfgt2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.513517 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.614974 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.615130 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.615304 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfgt2\" (UniqueName: \"kubernetes.io/projected/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-kube-api-access-jfgt2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.623508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.632977 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.634961 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfgt2\" (UniqueName: \"kubernetes.io/projected/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-kube-api-access-jfgt2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.772060 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:21 crc kubenswrapper[4768]: I1124 18:15:21.919150 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebedabf-6ef4-463c-98d0-d2afea402f61" path="/var/lib/kubelet/pods/8ebedabf-6ef4-463c-98d0-d2afea402f61/volumes" Nov 24 18:15:22 crc kubenswrapper[4768]: I1124 18:15:22.332055 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh"] Nov 24 18:15:22 crc kubenswrapper[4768]: W1124 18:15:22.336019 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7ac3ba_8436_4a5b_8da6_44b1ba7ea849.slice/crio-b322c1e0e2075aac185a9b48c26bbc59fe8bb17d766bcde2ae488d3e325cf888 WatchSource:0}: Error finding container b322c1e0e2075aac185a9b48c26bbc59fe8bb17d766bcde2ae488d3e325cf888: Status 404 returned error can't find the container with id b322c1e0e2075aac185a9b48c26bbc59fe8bb17d766bcde2ae488d3e325cf888 Nov 24 18:15:22 crc kubenswrapper[4768]: I1124 18:15:22.496017 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" event={"ID":"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849","Type":"ContainerStarted","Data":"b322c1e0e2075aac185a9b48c26bbc59fe8bb17d766bcde2ae488d3e325cf888"} Nov 24 18:15:22 crc kubenswrapper[4768]: I1124 18:15:22.860132 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:15:23 crc kubenswrapper[4768]: I1124 18:15:23.506565 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" event={"ID":"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849","Type":"ContainerStarted","Data":"9c69262bc637d665057ffdf7d9990b1722d521d791c0320c9e8195b54eed1578"} Nov 24 18:15:23 crc kubenswrapper[4768]: I1124 18:15:23.526741 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" podStartSLOduration=3.009616028 podStartE2EDuration="3.52671921s" podCreationTimestamp="2025-11-24 18:15:20 +0000 UTC" firstStartedPulling="2025-11-24 18:15:22.338789717 +0000 UTC m=+1561.199371494" lastFinishedPulling="2025-11-24 18:15:22.855892889 +0000 UTC m=+1561.716474676" observedRunningTime="2025-11-24 18:15:23.523794078 +0000 UTC m=+1562.384375865" watchObservedRunningTime="2025-11-24 18:15:23.52671921 +0000 UTC m=+1562.387300987" Nov 24 18:15:27 crc kubenswrapper[4768]: I1124 18:15:27.898071 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:15:27 crc kubenswrapper[4768]: E1124 18:15:27.899083 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:15:28 crc kubenswrapper[4768]: I1124 18:15:28.604114 4768 generic.go:334] "Generic (PLEG): container finished" podID="6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" containerID="9c69262bc637d665057ffdf7d9990b1722d521d791c0320c9e8195b54eed1578" exitCode=0 Nov 24 18:15:28 crc 
kubenswrapper[4768]: I1124 18:15:28.604238 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" event={"ID":"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849","Type":"ContainerDied","Data":"9c69262bc637d665057ffdf7d9990b1722d521d791c0320c9e8195b54eed1578"} Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.079616 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.231444 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfgt2\" (UniqueName: \"kubernetes.io/projected/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-kube-api-access-jfgt2\") pod \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.231755 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-ssh-key\") pod \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.232085 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-inventory\") pod \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\" (UID: \"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849\") " Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.239854 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-kube-api-access-jfgt2" (OuterVolumeSpecName: "kube-api-access-jfgt2") pod "6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" (UID: "6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849"). InnerVolumeSpecName "kube-api-access-jfgt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.259965 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" (UID: "6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.261776 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-inventory" (OuterVolumeSpecName: "inventory") pod "6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" (UID: "6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.334693 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.334767 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfgt2\" (UniqueName: \"kubernetes.io/projected/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-kube-api-access-jfgt2\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.334788 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.630771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" event={"ID":"6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849","Type":"ContainerDied","Data":"b322c1e0e2075aac185a9b48c26bbc59fe8bb17d766bcde2ae488d3e325cf888"} Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.630841 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b322c1e0e2075aac185a9b48c26bbc59fe8bb17d766bcde2ae488d3e325cf888" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.630925 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.708658 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn"] Nov 24 18:15:30 crc kubenswrapper[4768]: E1124 18:15:30.709424 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.709446 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.709999 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.710657 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.713246 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.713947 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.714031 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.714460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.718697 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn"] Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.844452 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.844634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89s5\" (UniqueName: \"kubernetes.io/projected/9036b15a-a981-414b-bb2f-dfc6c951f45a-kube-api-access-h89s5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.844691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.947031 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.947126 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89s5\" (UniqueName: \"kubernetes.io/projected/9036b15a-a981-414b-bb2f-dfc6c951f45a-kube-api-access-h89s5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.947168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: 
\"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.958832 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.961808 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:30 crc kubenswrapper[4768]: I1124 18:15:30.966965 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89s5\" (UniqueName: \"kubernetes.io/projected/9036b15a-a981-414b-bb2f-dfc6c951f45a-kube-api-access-h89s5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vhwwn\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:31 crc kubenswrapper[4768]: I1124 18:15:31.033630 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:15:31 crc kubenswrapper[4768]: I1124 18:15:31.428829 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn"] Nov 24 18:15:31 crc kubenswrapper[4768]: W1124 18:15:31.434922 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9036b15a_a981_414b_bb2f_dfc6c951f45a.slice/crio-749f9b8492f575c8a47b5921ece4ead30de9d925ae4937f04e9f0d50ccabe847 WatchSource:0}: Error finding container 749f9b8492f575c8a47b5921ece4ead30de9d925ae4937f04e9f0d50ccabe847: Status 404 returned error can't find the container with id 749f9b8492f575c8a47b5921ece4ead30de9d925ae4937f04e9f0d50ccabe847 Nov 24 18:15:31 crc kubenswrapper[4768]: I1124 18:15:31.642744 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" event={"ID":"9036b15a-a981-414b-bb2f-dfc6c951f45a","Type":"ContainerStarted","Data":"749f9b8492f575c8a47b5921ece4ead30de9d925ae4937f04e9f0d50ccabe847"} Nov 24 18:15:32 crc kubenswrapper[4768]: I1124 18:15:32.658572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" event={"ID":"9036b15a-a981-414b-bb2f-dfc6c951f45a","Type":"ContainerStarted","Data":"429480a2ea66dc04bc3d43f98f64beb9e3240c5c33c2f54b7498e4b42367bccf"} Nov 24 18:15:32 crc kubenswrapper[4768]: I1124 18:15:32.715093 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" podStartSLOduration=1.829336474 podStartE2EDuration="2.715076552s" podCreationTimestamp="2025-11-24 18:15:30 +0000 UTC" firstStartedPulling="2025-11-24 18:15:31.438703946 +0000 UTC m=+1570.299285723" lastFinishedPulling="2025-11-24 18:15:32.324444024 +0000 UTC m=+1571.185025801" observedRunningTime="2025-11-24 18:15:32.711346965 +0000 UTC 
m=+1571.571928752" watchObservedRunningTime="2025-11-24 18:15:32.715076552 +0000 UTC m=+1571.575658319" Nov 24 18:15:38 crc kubenswrapper[4768]: I1124 18:15:38.898914 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:15:38 crc kubenswrapper[4768]: E1124 18:15:38.899706 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:15:41 crc kubenswrapper[4768]: I1124 18:15:41.043713 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5bc6-account-create-pktcg"] Nov 24 18:15:41 crc kubenswrapper[4768]: I1124 18:15:41.053070 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rr57f"] Nov 24 18:15:41 crc kubenswrapper[4768]: I1124 18:15:41.061627 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rr57f"] Nov 24 18:15:41 crc kubenswrapper[4768]: I1124 18:15:41.069908 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5bc6-account-create-pktcg"] Nov 24 18:15:41 crc kubenswrapper[4768]: I1124 18:15:41.909374 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61443b8e-bd3a-437e-8440-323561bc319b" path="/var/lib/kubelet/pods/61443b8e-bd3a-437e-8440-323561bc319b/volumes" Nov 24 18:15:41 crc kubenswrapper[4768]: I1124 18:15:41.910219 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab99221a-fafe-469a-a7e1-3355f432075e" path="/var/lib/kubelet/pods/ab99221a-fafe-469a-a7e1-3355f432075e/volumes" Nov 24 18:15:46 crc kubenswrapper[4768]: I1124 18:15:46.034673 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tlqfl"] Nov 24 18:15:46 crc kubenswrapper[4768]: I1124 18:15:46.043708 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tlqfl"] Nov 24 18:15:46 crc kubenswrapper[4768]: I1124 18:15:46.052010 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8f20-account-create-b25tz"] Nov 24 18:15:46 crc kubenswrapper[4768]: I1124 18:15:46.059377 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8f20-account-create-b25tz"] Nov 24 18:15:47 crc kubenswrapper[4768]: I1124 18:15:47.201966 4768 scope.go:117] "RemoveContainer" containerID="4dfff93858a5196489734d7e1c0d2b60a4876101d5d156beb41ed593099ac4b3" Nov 24 18:15:47 crc kubenswrapper[4768]: I1124 18:15:47.225835 4768 scope.go:117] "RemoveContainer" containerID="a169a73070d4287c5b74c781368efec82b49ac4e6b5f372cf041e2a54b5af230" Nov 24 18:15:47 crc kubenswrapper[4768]: I1124 18:15:47.912336 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52affb5e-149e-4868-a48d-4f4ab569947a" path="/var/lib/kubelet/pods/52affb5e-149e-4868-a48d-4f4ab569947a/volumes" Nov 24 18:15:47 crc kubenswrapper[4768]: I1124 18:15:47.913740 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4d5295-ba8b-4369-a191-2e51f0cf1d51" path="/var/lib/kubelet/pods/fa4d5295-ba8b-4369-a191-2e51f0cf1d51/volumes" Nov 24 18:15:51 crc kubenswrapper[4768]: I1124 18:15:51.039340 4768 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-1661-account-create-6bzhc"] Nov 24 18:15:51 crc kubenswrapper[4768]: I1124 18:15:51.052560 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dt8hn"] Nov 24 18:15:51 crc kubenswrapper[4768]: I1124 18:15:51.062855 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dt8hn"] Nov 24 18:15:51 crc kubenswrapper[4768]: I1124 18:15:51.073461 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1661-account-create-6bzhc"] Nov 24 18:15:51 crc kubenswrapper[4768]: I1124 18:15:51.909767 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0123e4-321e-46c6-9fad-ab2860c14050" path="/var/lib/kubelet/pods/5d0123e4-321e-46c6-9fad-ab2860c14050/volumes" Nov 24 18:15:51 crc kubenswrapper[4768]: I1124 18:15:51.910636 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae89b236-a8cc-49bc-8ad3-6601f4b97450" path="/var/lib/kubelet/pods/ae89b236-a8cc-49bc-8ad3-6601f4b97450/volumes" Nov 24 18:15:53 crc kubenswrapper[4768]: I1124 18:15:53.899654 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:15:53 crc kubenswrapper[4768]: E1124 18:15:53.900935 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:16:07 crc kubenswrapper[4768]: I1124 18:16:07.899203 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:16:07 crc kubenswrapper[4768]: E1124 18:16:07.900297 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.032020 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9e79-account-create-r62zt"] Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.043288 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fwkxc"] Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.054946 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9e79-account-create-r62zt"] Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.065850 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-af38-account-create-rns5f"] Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.073185 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fwkxc"] Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.079391 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-af38-account-create-rns5f"] Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.086426 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fjmk9"] 
Nov 24 18:16:08 crc kubenswrapper[4768]: I1124 18:16:08.093323 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fjmk9"] Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.049111 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-05c6-account-create-dxr7l"] Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.058618 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-05c6-account-create-dxr7l"] Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.912306 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382cc76c-7ba2-45f4-898c-10608b068c36" path="/var/lib/kubelet/pods/382cc76c-7ba2-45f4-898c-10608b068c36/volumes" Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.912930 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557ae8bd-5ad0-4822-bff1-6274e4523aa0" path="/var/lib/kubelet/pods/557ae8bd-5ad0-4822-bff1-6274e4523aa0/volumes" Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.913578 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d3e000-d092-48ca-bf36-ecbb55cf016b" path="/var/lib/kubelet/pods/96d3e000-d092-48ca-bf36-ecbb55cf016b/volumes" Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.914216 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e1e485-bf18-48d5-bb34-f213b5680994" path="/var/lib/kubelet/pods/b7e1e485-bf18-48d5-bb34-f213b5680994/volumes" Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.915270 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3159486-5491-4468-b849-04e91c41b248" path="/var/lib/kubelet/pods/c3159486-5491-4468-b849-04e91c41b248/volumes" Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.972531 4768 generic.go:334] "Generic (PLEG): container finished" podID="9036b15a-a981-414b-bb2f-dfc6c951f45a" containerID="429480a2ea66dc04bc3d43f98f64beb9e3240c5c33c2f54b7498e4b42367bccf" exitCode=0 Nov 24 18:16:09 crc kubenswrapper[4768]: I1124 18:16:09.972579 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" event={"ID":"9036b15a-a981-414b-bb2f-dfc6c951f45a","Type":"ContainerDied","Data":"429480a2ea66dc04bc3d43f98f64beb9e3240c5c33c2f54b7498e4b42367bccf"} Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.055963 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rknvx"] Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.063968 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rknvx"] Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.346092 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.536243 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89s5\" (UniqueName: \"kubernetes.io/projected/9036b15a-a981-414b-bb2f-dfc6c951f45a-kube-api-access-h89s5\") pod \"9036b15a-a981-414b-bb2f-dfc6c951f45a\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.536642 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-inventory\") pod \"9036b15a-a981-414b-bb2f-dfc6c951f45a\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.536752 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-ssh-key\") pod \"9036b15a-a981-414b-bb2f-dfc6c951f45a\" (UID: \"9036b15a-a981-414b-bb2f-dfc6c951f45a\") " Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.541433 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9036b15a-a981-414b-bb2f-dfc6c951f45a-kube-api-access-h89s5" (OuterVolumeSpecName: "kube-api-access-h89s5") pod "9036b15a-a981-414b-bb2f-dfc6c951f45a" (UID: "9036b15a-a981-414b-bb2f-dfc6c951f45a"). InnerVolumeSpecName "kube-api-access-h89s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.560993 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9036b15a-a981-414b-bb2f-dfc6c951f45a" (UID: "9036b15a-a981-414b-bb2f-dfc6c951f45a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.565931 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-inventory" (OuterVolumeSpecName: "inventory") pod "9036b15a-a981-414b-bb2f-dfc6c951f45a" (UID: "9036b15a-a981-414b-bb2f-dfc6c951f45a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.639590 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89s5\" (UniqueName: \"kubernetes.io/projected/9036b15a-a981-414b-bb2f-dfc6c951f45a-kube-api-access-h89s5\") on node \"crc\" DevicePath \"\"" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.639620 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.639629 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9036b15a-a981-414b-bb2f-dfc6c951f45a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.914969 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0272e837-2dbf-4eca-bbf5-c33af7822bd2" path="/var/lib/kubelet/pods/0272e837-2dbf-4eca-bbf5-c33af7822bd2/volumes" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.994516 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" event={"ID":"9036b15a-a981-414b-bb2f-dfc6c951f45a","Type":"ContainerDied","Data":"749f9b8492f575c8a47b5921ece4ead30de9d925ae4937f04e9f0d50ccabe847"} Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.994577 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="749f9b8492f575c8a47b5921ece4ead30de9d925ae4937f04e9f0d50ccabe847" Nov 24 18:16:11 crc kubenswrapper[4768]: I1124 18:16:11.994674 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.056641 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-q6fzj"] Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.066690 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-q6fzj"] Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.091352 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp"] Nov 24 18:16:12 crc kubenswrapper[4768]: E1124 18:16:12.091734 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9036b15a-a981-414b-bb2f-dfc6c951f45a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.091775 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9036b15a-a981-414b-bb2f-dfc6c951f45a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.091980 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9036b15a-a981-414b-bb2f-dfc6c951f45a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.092602 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.095984 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.096312 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.097286 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.103801 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.104257 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp"] Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.252233 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.253099 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx48\" (UniqueName: \"kubernetes.io/projected/80f9e8fa-639a-4ac6-9a56-437263b9f342-kube-api-access-ssx48\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.253295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.356573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.356694 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.356896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx48\" (UniqueName: \"kubernetes.io/projected/80f9e8fa-639a-4ac6-9a56-437263b9f342-kube-api-access-ssx48\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" 
(UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.364636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.364636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.374302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx48\" (UniqueName: \"kubernetes.io/projected/80f9e8fa-639a-4ac6-9a56-437263b9f342-kube-api-access-ssx48\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.413023 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.934859 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp"] Nov 24 18:16:12 crc kubenswrapper[4768]: I1124 18:16:12.942088 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 18:16:13 crc kubenswrapper[4768]: I1124 18:16:13.007084 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" event={"ID":"80f9e8fa-639a-4ac6-9a56-437263b9f342","Type":"ContainerStarted","Data":"4104ec67ab1d16a13d2d2a337da894f3ca7fa3259fc0bfa3daf6a16793b29938"} Nov 24 18:16:13 crc kubenswrapper[4768]: I1124 18:16:13.909518 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3c858f-af78-4df6-b30a-b7921b5a80f3" path="/var/lib/kubelet/pods/6d3c858f-af78-4df6-b30a-b7921b5a80f3/volumes" Nov 24 18:16:14 crc kubenswrapper[4768]: I1124 18:16:14.015997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" event={"ID":"80f9e8fa-639a-4ac6-9a56-437263b9f342","Type":"ContainerStarted","Data":"1fa8c1a58e7ebc61eed2c7259d87ba0fd07ba89ea54ad0b48ad55b1964b7fccb"} Nov 24 18:16:14 crc kubenswrapper[4768]: I1124 18:16:14.033414 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" podStartSLOduration=1.456166763 podStartE2EDuration="2.033386566s" podCreationTimestamp="2025-11-24 18:16:12 +0000 UTC" firstStartedPulling="2025-11-24 18:16:12.941670603 +0000 UTC m=+1611.802252390" lastFinishedPulling="2025-11-24 18:16:13.518890416 +0000 UTC m=+1612.379472193" observedRunningTime="2025-11-24 18:16:14.029646111 +0000 UTC m=+1612.890227918" watchObservedRunningTime="2025-11-24 18:16:14.033386566 +0000 UTC m=+1612.893968363" 
Nov 24 18:16:18 crc kubenswrapper[4768]: I1124 18:16:18.056118 4768 generic.go:334] "Generic (PLEG): container finished" podID="80f9e8fa-639a-4ac6-9a56-437263b9f342" containerID="1fa8c1a58e7ebc61eed2c7259d87ba0fd07ba89ea54ad0b48ad55b1964b7fccb" exitCode=0 Nov 24 18:16:18 crc kubenswrapper[4768]: I1124 18:16:18.056447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" event={"ID":"80f9e8fa-639a-4ac6-9a56-437263b9f342","Type":"ContainerDied","Data":"1fa8c1a58e7ebc61eed2c7259d87ba0fd07ba89ea54ad0b48ad55b1964b7fccb"} Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.037060 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qqlpp"] Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.045327 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qqlpp"] Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.455072 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.605909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssx48\" (UniqueName: \"kubernetes.io/projected/80f9e8fa-639a-4ac6-9a56-437263b9f342-kube-api-access-ssx48\") pod \"80f9e8fa-639a-4ac6-9a56-437263b9f342\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.605957 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-ssh-key\") pod \"80f9e8fa-639a-4ac6-9a56-437263b9f342\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.606175 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-inventory\") pod \"80f9e8fa-639a-4ac6-9a56-437263b9f342\" (UID: \"80f9e8fa-639a-4ac6-9a56-437263b9f342\") " Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.616937 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f9e8fa-639a-4ac6-9a56-437263b9f342-kube-api-access-ssx48" (OuterVolumeSpecName: "kube-api-access-ssx48") pod "80f9e8fa-639a-4ac6-9a56-437263b9f342" (UID: "80f9e8fa-639a-4ac6-9a56-437263b9f342"). InnerVolumeSpecName "kube-api-access-ssx48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.634229 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "80f9e8fa-639a-4ac6-9a56-437263b9f342" (UID: "80f9e8fa-639a-4ac6-9a56-437263b9f342"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.635072 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-inventory" (OuterVolumeSpecName: "inventory") pod "80f9e8fa-639a-4ac6-9a56-437263b9f342" (UID: "80f9e8fa-639a-4ac6-9a56-437263b9f342"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.708108 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.708452 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssx48\" (UniqueName: \"kubernetes.io/projected/80f9e8fa-639a-4ac6-9a56-437263b9f342-kube-api-access-ssx48\") on node \"crc\" DevicePath \"\"" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.708465 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80f9e8fa-639a-4ac6-9a56-437263b9f342-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.899423 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:16:19 crc kubenswrapper[4768]: E1124 18:16:19.899894 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:16:19 crc kubenswrapper[4768]: I1124 18:16:19.916318 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e445a9-a207-41eb-816d-de70c981c8c2" path="/var/lib/kubelet/pods/65e445a9-a207-41eb-816d-de70c981c8c2/volumes" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.079037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" event={"ID":"80f9e8fa-639a-4ac6-9a56-437263b9f342","Type":"ContainerDied","Data":"4104ec67ab1d16a13d2d2a337da894f3ca7fa3259fc0bfa3daf6a16793b29938"} Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.079113 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4104ec67ab1d16a13d2d2a337da894f3ca7fa3259fc0bfa3daf6a16793b29938" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.079070 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.135766 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564"] Nov 24 18:16:20 crc kubenswrapper[4768]: E1124 18:16:20.136171 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f9e8fa-639a-4ac6-9a56-437263b9f342" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.136190 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f9e8fa-639a-4ac6-9a56-437263b9f342" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.136372 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f9e8fa-639a-4ac6-9a56-437263b9f342" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.136968 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.140007 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.140250 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.143974 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.144764 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.149398 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564"] Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.319156 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.319217 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.319371 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptxh8\" (UniqueName: \"kubernetes.io/projected/c8027614-458f-4bf6-a0fd-931723d17b8c-kube-api-access-ptxh8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.421555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.421749 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptxh8\" (UniqueName: \"kubernetes.io/projected/c8027614-458f-4bf6-a0fd-931723d17b8c-kube-api-access-ptxh8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.421901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" 
(UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.426971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.427991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.444141 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptxh8\" (UniqueName: \"kubernetes.io/projected/c8027614-458f-4bf6-a0fd-931723d17b8c-kube-api-access-ptxh8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vs564\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.458579 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:16:20 crc kubenswrapper[4768]: I1124 18:16:20.986706 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564"] Nov 24 18:16:21 crc kubenswrapper[4768]: I1124 18:16:21.087319 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" event={"ID":"c8027614-458f-4bf6-a0fd-931723d17b8c","Type":"ContainerStarted","Data":"dee83b877396cce0a503c4cd3ffd36d5ee997a9341398bdd800c7a1e1ae0bdde"} Nov 24 18:16:22 crc kubenswrapper[4768]: I1124 18:16:22.513453 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:16:24 crc kubenswrapper[4768]: I1124 18:16:24.122691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" event={"ID":"c8027614-458f-4bf6-a0fd-931723d17b8c","Type":"ContainerStarted","Data":"61d78c2ada8c5af9b55dedcd7f435044d4457f456a22b042642028aa2bf5753a"} Nov 24 18:16:24 crc kubenswrapper[4768]: I1124 18:16:24.150029 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" podStartSLOduration=2.637215537 podStartE2EDuration="4.150009649s" podCreationTimestamp="2025-11-24 18:16:20 +0000 UTC" firstStartedPulling="2025-11-24 18:16:20.997450365 +0000 UTC m=+1619.858032142" lastFinishedPulling="2025-11-24 18:16:22.510244477 +0000 UTC m=+1621.370826254" observedRunningTime="2025-11-24 18:16:24.146907456 +0000 UTC m=+1623.007489233" watchObservedRunningTime="2025-11-24 18:16:24.150009649 +0000 UTC m=+1623.010591426" Nov 24 18:16:30 crc kubenswrapper[4768]: I1124 18:16:30.899567 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:16:30 crc kubenswrapper[4768]: E1124 18:16:30.900786 
4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:16:45 crc kubenswrapper[4768]: I1124 18:16:45.052764 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-55cpw"] Nov 24 18:16:45 crc kubenswrapper[4768]: I1124 18:16:45.060278 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-55cpw"] Nov 24 18:16:45 crc kubenswrapper[4768]: I1124 18:16:45.898554 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:16:45 crc kubenswrapper[4768]: E1124 18:16:45.899055 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:16:45 crc kubenswrapper[4768]: I1124 18:16:45.914819 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd4bc52-bb80-45a4-8666-28e28e129c9e" path="/var/lib/kubelet/pods/dfd4bc52-bb80-45a4-8666-28e28e129c9e/volumes" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.352088 4768 scope.go:117] "RemoveContainer" containerID="11f244794baf49fd0d4b90ddcd02bc2fed357939076bd47a0dcff10fe7323daf" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.384657 4768 scope.go:117] "RemoveContainer" containerID="6ef4d6d26867bad5c71db8fe356bb1acabb8dcb554eecea630377aa8eafa3df9" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.437887 4768 scope.go:117] "RemoveContainer" containerID="d481d9d9973fd0b10ed8a5599ecd13af671880b54e2a552bb7f675baf9f44e88" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.475578 4768 scope.go:117] "RemoveContainer" containerID="c565963c12bbdeecd4b0562451d2fdc911dc48360cd7c249d9642fe77b841227" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.515244 4768 scope.go:117] "RemoveContainer" containerID="a48a584276e1be535d7f4be9a5516457657724a67c9945cc82c71fbe13a7e8df" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.582096 4768 scope.go:117] "RemoveContainer" containerID="ada2beb9e7d37911e3ac421d94c6a5979ba2df3ab7033a8dfa9ef7f3ac59d407" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.612957 4768 scope.go:117] "RemoveContainer" containerID="dc47d00201cad57cca1bbec85467872c252113a2bb20be002f376408a9bd60d3" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.649949 4768 scope.go:117] "RemoveContainer" containerID="fae19a5ef71d853c1657876c58cf71ca2f4bee33723872d93533aa2608ff41ba" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.683379 4768 scope.go:117] "RemoveContainer" containerID="7f581c1550e211cb9c91983d61d27d048cd945db28c3e2c53c13de1821fe0993" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.703681 4768 scope.go:117] "RemoveContainer" containerID="3f1429823adb11918549a411b384624645b80d96242d15a303ea7cb45600c115" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.724468 4768 scope.go:117] 
"RemoveContainer" containerID="d384579148efc59c12f00c3a32f6aa5e5cdad4836a558c8cb959bf635f7fd590" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.748079 4768 scope.go:117] "RemoveContainer" containerID="98d4b5745c9842133ca4de202d94ad1e0b87da5c8edd35c61ba1ada393f9112a" Nov 24 18:16:47 crc kubenswrapper[4768]: I1124 18:16:47.767464 4768 scope.go:117] "RemoveContainer" containerID="85c0bcee147f1445d747909ca66bc48a682424d8ecf1c3ecfb8bdff98dd20509" Nov 24 18:16:50 crc kubenswrapper[4768]: I1124 18:16:50.038175 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xs4kv"] Nov 24 18:16:50 crc kubenswrapper[4768]: I1124 18:16:50.047800 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ttdbg"] Nov 24 18:16:50 crc kubenswrapper[4768]: I1124 18:16:50.054395 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xs4kv"] Nov 24 18:16:50 crc kubenswrapper[4768]: I1124 18:16:50.061929 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ttdbg"] Nov 24 18:16:51 crc kubenswrapper[4768]: I1124 18:16:51.039154 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rgvsd"] Nov 24 18:16:51 crc kubenswrapper[4768]: I1124 18:16:51.051702 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rgvsd"] Nov 24 18:16:51 crc kubenswrapper[4768]: I1124 18:16:51.912253 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="093bb01a-1d6c-43cb-a0f0-7868857e241a" path="/var/lib/kubelet/pods/093bb01a-1d6c-43cb-a0f0-7868857e241a/volumes" Nov 24 18:16:51 crc kubenswrapper[4768]: I1124 18:16:51.913783 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188f141f-b2a1-4ca5-b86d-ac1c6ea86163" path="/var/lib/kubelet/pods/188f141f-b2a1-4ca5-b86d-ac1c6ea86163/volumes" Nov 24 18:16:51 crc kubenswrapper[4768]: I1124 18:16:51.915407 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738a244f-751e-4d50-8ba2-6a9d122b9a69" path="/var/lib/kubelet/pods/738a244f-751e-4d50-8ba2-6a9d122b9a69/volumes" Nov 24 18:16:58 crc kubenswrapper[4768]: I1124 18:16:58.900177 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:16:58 crc kubenswrapper[4768]: E1124 18:16:58.900947 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:17:07 crc kubenswrapper[4768]: I1124 18:17:07.050428 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wpggd"] Nov 24 18:17:07 crc kubenswrapper[4768]: I1124 18:17:07.079831 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wpggd"] Nov 24 18:17:07 crc kubenswrapper[4768]: I1124 18:17:07.910874 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed13008-e82b-40d6-af72-abfb5a1223fb" path="/var/lib/kubelet/pods/8ed13008-e82b-40d6-af72-abfb5a1223fb/volumes" Nov 24 18:17:09 crc kubenswrapper[4768]: I1124 18:17:09.899620 4768 scope.go:117] "RemoveContainer" 
containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:17:09 crc kubenswrapper[4768]: E1124 18:17:09.900277 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:17:11 crc kubenswrapper[4768]: I1124 18:17:11.574641 4768 generic.go:334] "Generic (PLEG): container finished" podID="c8027614-458f-4bf6-a0fd-931723d17b8c" containerID="61d78c2ada8c5af9b55dedcd7f435044d4457f456a22b042642028aa2bf5753a" exitCode=0 Nov 24 18:17:11 crc kubenswrapper[4768]: I1124 18:17:11.575666 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" event={"ID":"c8027614-458f-4bf6-a0fd-931723d17b8c","Type":"ContainerDied","Data":"61d78c2ada8c5af9b55dedcd7f435044d4457f456a22b042642028aa2bf5753a"} Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.028461 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.150823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-ssh-key\") pod \"c8027614-458f-4bf6-a0fd-931723d17b8c\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.150907 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptxh8\" (UniqueName: \"kubernetes.io/projected/c8027614-458f-4bf6-a0fd-931723d17b8c-kube-api-access-ptxh8\") pod \"c8027614-458f-4bf6-a0fd-931723d17b8c\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.151076 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-inventory\") pod \"c8027614-458f-4bf6-a0fd-931723d17b8c\" (UID: \"c8027614-458f-4bf6-a0fd-931723d17b8c\") " Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.156675 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8027614-458f-4bf6-a0fd-931723d17b8c-kube-api-access-ptxh8" (OuterVolumeSpecName: "kube-api-access-ptxh8") pod "c8027614-458f-4bf6-a0fd-931723d17b8c" (UID: "c8027614-458f-4bf6-a0fd-931723d17b8c"). InnerVolumeSpecName "kube-api-access-ptxh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.180931 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c8027614-458f-4bf6-a0fd-931723d17b8c" (UID: "c8027614-458f-4bf6-a0fd-931723d17b8c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.181426 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-inventory" (OuterVolumeSpecName: "inventory") pod "c8027614-458f-4bf6-a0fd-931723d17b8c" (UID: "c8027614-458f-4bf6-a0fd-931723d17b8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.253427 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.253456 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptxh8\" (UniqueName: \"kubernetes.io/projected/c8027614-458f-4bf6-a0fd-931723d17b8c-kube-api-access-ptxh8\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.253470 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8027614-458f-4bf6-a0fd-931723d17b8c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.594560 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" event={"ID":"c8027614-458f-4bf6-a0fd-931723d17b8c","Type":"ContainerDied","Data":"dee83b877396cce0a503c4cd3ffd36d5ee997a9341398bdd800c7a1e1ae0bdde"} Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.594609 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee83b877396cce0a503c4cd3ffd36d5ee997a9341398bdd800c7a1e1ae0bdde" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.594644 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.672274 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-95vn5"] Nov 24 18:17:13 crc kubenswrapper[4768]: E1124 18:17:13.672689 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8027614-458f-4bf6-a0fd-931723d17b8c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.672711 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8027614-458f-4bf6-a0fd-931723d17b8c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.672921 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8027614-458f-4bf6-a0fd-931723d17b8c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.673593 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.675700 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.676250 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.676999 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.677468 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.686143 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-95vn5"] Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.763784 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.763850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvqgl\" (UniqueName: \"kubernetes.io/projected/88b5568c-b02c-4dc8-a356-a22d9f5815b8-kube-api-access-gvqgl\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.763898 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.865872 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.865930 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvqgl\" (UniqueName: \"kubernetes.io/projected/88b5568c-b02c-4dc8-a356-a22d9f5815b8-kube-api-access-gvqgl\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.865969 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc 
kubenswrapper[4768]: I1124 18:17:13.871422 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.871613 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.888659 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvqgl\" (UniqueName: \"kubernetes.io/projected/88b5568c-b02c-4dc8-a356-a22d9f5815b8-kube-api-access-gvqgl\") pod \"ssh-known-hosts-edpm-deployment-95vn5\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:13 crc kubenswrapper[4768]: I1124 18:17:13.991162 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:14 crc kubenswrapper[4768]: I1124 18:17:14.499010 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-95vn5"] Nov 24 18:17:14 crc kubenswrapper[4768]: I1124 18:17:14.604244 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" event={"ID":"88b5568c-b02c-4dc8-a356-a22d9f5815b8","Type":"ContainerStarted","Data":"12ec3b68a710efdf75470d582c5f6ab7f5267bfc442b725dc480311b237ab674"} Nov 24 18:17:15 crc kubenswrapper[4768]: I1124 18:17:15.629145 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" event={"ID":"88b5568c-b02c-4dc8-a356-a22d9f5815b8","Type":"ContainerStarted","Data":"a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890"} Nov 24 18:17:15 crc kubenswrapper[4768]: I1124 18:17:15.659778 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" podStartSLOduration=2.163440939 podStartE2EDuration="2.659757001s" podCreationTimestamp="2025-11-24 18:17:13 +0000 UTC" firstStartedPulling="2025-11-24 18:17:14.505572613 +0000 UTC m=+1673.366154390" lastFinishedPulling="2025-11-24 18:17:15.001888675 +0000 UTC m=+1673.862470452" observedRunningTime="2025-11-24 18:17:15.652623581 +0000 UTC m=+1674.513205368" watchObservedRunningTime="2025-11-24 18:17:15.659757001 +0000 UTC m=+1674.520338798" Nov 24 18:17:22 crc kubenswrapper[4768]: I1124 18:17:22.698704 4768 generic.go:334] "Generic (PLEG): container finished" podID="88b5568c-b02c-4dc8-a356-a22d9f5815b8" containerID="a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890" exitCode=0 Nov 24 18:17:22 crc kubenswrapper[4768]: I1124 18:17:22.699444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" event={"ID":"88b5568c-b02c-4dc8-a356-a22d9f5815b8","Type":"ContainerDied","Data":"a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890"} Nov 24 18:17:23 crc kubenswrapper[4768]: I1124 18:17:23.898252 4768 scope.go:117] "RemoveContainer" 
containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:17:23 crc kubenswrapper[4768]: E1124 18:17:23.898791 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.079866 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.170238 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvqgl\" (UniqueName: \"kubernetes.io/projected/88b5568c-b02c-4dc8-a356-a22d9f5815b8-kube-api-access-gvqgl\") pod \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.170386 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-inventory-0\") pod \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.170407 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-ssh-key-openstack-edpm-ipam\") pod \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\" (UID: \"88b5568c-b02c-4dc8-a356-a22d9f5815b8\") " Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.176020 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b5568c-b02c-4dc8-a356-a22d9f5815b8-kube-api-access-gvqgl" (OuterVolumeSpecName: "kube-api-access-gvqgl") pod "88b5568c-b02c-4dc8-a356-a22d9f5815b8" (UID: "88b5568c-b02c-4dc8-a356-a22d9f5815b8"). InnerVolumeSpecName "kube-api-access-gvqgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.198380 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "88b5568c-b02c-4dc8-a356-a22d9f5815b8" (UID: "88b5568c-b02c-4dc8-a356-a22d9f5815b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.201618 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "88b5568c-b02c-4dc8-a356-a22d9f5815b8" (UID: "88b5568c-b02c-4dc8-a356-a22d9f5815b8"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.272191 4768 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.272225 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88b5568c-b02c-4dc8-a356-a22d9f5815b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.272238 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvqgl\" (UniqueName: \"kubernetes.io/projected/88b5568c-b02c-4dc8-a356-a22d9f5815b8-kube-api-access-gvqgl\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:24 crc kubenswrapper[4768]: E1124 18:17:24.499993 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b5568c_b02c_4dc8_a356_a22d9f5815b8.slice/crio-a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.722676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" event={"ID":"88b5568c-b02c-4dc8-a356-a22d9f5815b8","Type":"ContainerDied","Data":"12ec3b68a710efdf75470d582c5f6ab7f5267bfc442b725dc480311b237ab674"} Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.722715 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12ec3b68a710efdf75470d582c5f6ab7f5267bfc442b725dc480311b237ab674" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.722750 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-95vn5" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.795958 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd"] Nov 24 18:17:24 crc kubenswrapper[4768]: E1124 18:17:24.796375 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b5568c-b02c-4dc8-a356-a22d9f5815b8" containerName="ssh-known-hosts-edpm-deployment" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.796390 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b5568c-b02c-4dc8-a356-a22d9f5815b8" containerName="ssh-known-hosts-edpm-deployment" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.796596 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b5568c-b02c-4dc8-a356-a22d9f5815b8" containerName="ssh-known-hosts-edpm-deployment" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.797428 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.802968 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.803023 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.803031 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.803460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.807033 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd"] Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.882600 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.882678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.882779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trlqh\" (UniqueName: \"kubernetes.io/projected/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-kube-api-access-trlqh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.984635 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.984709 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.984781 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trlqh\" (UniqueName: \"kubernetes.io/projected/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-kube-api-access-trlqh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.998244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:24 crc kubenswrapper[4768]: I1124 18:17:24.999052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:25 crc kubenswrapper[4768]: I1124 18:17:25.002211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trlqh\" (UniqueName: \"kubernetes.io/projected/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-kube-api-access-trlqh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2jrcd\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:25 crc kubenswrapper[4768]: I1124 18:17:25.118334 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:25 crc kubenswrapper[4768]: I1124 18:17:25.634599 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd"] Nov 24 18:17:25 crc kubenswrapper[4768]: W1124 18:17:25.642925 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode810de0d_60fb_474c_ba3f_1dd7f4cbc445.slice/crio-f84c61b200a67a153fa6f91058bcd08eb6a039041bbd0a9b98a19d72b6109daf WatchSource:0}: Error finding container f84c61b200a67a153fa6f91058bcd08eb6a039041bbd0a9b98a19d72b6109daf: Status 404 returned error can't find the container with id f84c61b200a67a153fa6f91058bcd08eb6a039041bbd0a9b98a19d72b6109daf Nov 24 18:17:25 crc kubenswrapper[4768]: I1124 18:17:25.733101 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" event={"ID":"e810de0d-60fb-474c-ba3f-1dd7f4cbc445","Type":"ContainerStarted","Data":"f84c61b200a67a153fa6f91058bcd08eb6a039041bbd0a9b98a19d72b6109daf"} Nov 24 18:17:26 crc kubenswrapper[4768]: I1124 18:17:26.745303 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" event={"ID":"e810de0d-60fb-474c-ba3f-1dd7f4cbc445","Type":"ContainerStarted","Data":"4a6213edac136ac0c5576344cdef037bf92046653801179bfbf1b691baa63956"} Nov 24 18:17:26 crc kubenswrapper[4768]: I1124 18:17:26.774193 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" podStartSLOduration=2.333478959 podStartE2EDuration="2.77416395s" podCreationTimestamp="2025-11-24 18:17:24 +0000 UTC" firstStartedPulling="2025-11-24 18:17:25.64532683 +0000 UTC m=+1684.505908607" lastFinishedPulling="2025-11-24 18:17:26.086011821 +0000 UTC m=+1684.946593598" observedRunningTime="2025-11-24 18:17:26.765544039 +0000 UTC m=+1685.626125846" watchObservedRunningTime="2025-11-24 18:17:26.77416395 +0000 UTC m=+1685.634745767" 
Nov 24 18:17:34 crc kubenswrapper[4768]: E1124 18:17:34.742822 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b5568c_b02c_4dc8_a356_a22d9f5815b8.slice/crio-a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:17:34 crc kubenswrapper[4768]: I1124 18:17:34.822944 4768 generic.go:334] "Generic (PLEG): container finished" podID="e810de0d-60fb-474c-ba3f-1dd7f4cbc445" containerID="4a6213edac136ac0c5576344cdef037bf92046653801179bfbf1b691baa63956" exitCode=0 Nov 24 18:17:34 crc kubenswrapper[4768]: I1124 18:17:34.822998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" event={"ID":"e810de0d-60fb-474c-ba3f-1dd7f4cbc445","Type":"ContainerDied","Data":"4a6213edac136ac0c5576344cdef037bf92046653801179bfbf1b691baa63956"} Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.259132 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.399016 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trlqh\" (UniqueName: \"kubernetes.io/projected/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-kube-api-access-trlqh\") pod \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.399528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-ssh-key\") pod \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.399571 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-inventory\") pod \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\" (UID: \"e810de0d-60fb-474c-ba3f-1dd7f4cbc445\") " Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.405586 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-kube-api-access-trlqh" (OuterVolumeSpecName: "kube-api-access-trlqh") pod "e810de0d-60fb-474c-ba3f-1dd7f4cbc445" (UID: "e810de0d-60fb-474c-ba3f-1dd7f4cbc445"). InnerVolumeSpecName "kube-api-access-trlqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.427156 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-inventory" (OuterVolumeSpecName: "inventory") pod "e810de0d-60fb-474c-ba3f-1dd7f4cbc445" (UID: "e810de0d-60fb-474c-ba3f-1dd7f4cbc445"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.441384 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e810de0d-60fb-474c-ba3f-1dd7f4cbc445" (UID: "e810de0d-60fb-474c-ba3f-1dd7f4cbc445"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.501988 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trlqh\" (UniqueName: \"kubernetes.io/projected/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-kube-api-access-trlqh\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.502025 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.502037 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e810de0d-60fb-474c-ba3f-1dd7f4cbc445-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.843304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" event={"ID":"e810de0d-60fb-474c-ba3f-1dd7f4cbc445","Type":"ContainerDied","Data":"f84c61b200a67a153fa6f91058bcd08eb6a039041bbd0a9b98a19d72b6109daf"} Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.843345 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.843345 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84c61b200a67a153fa6f91058bcd08eb6a039041bbd0a9b98a19d72b6109daf" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.923840 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"] Nov 24 18:17:36 crc kubenswrapper[4768]: E1124 18:17:36.924335 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e810de0d-60fb-474c-ba3f-1dd7f4cbc445" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.924354 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e810de0d-60fb-474c-ba3f-1dd7f4cbc445" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.924637 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e810de0d-60fb-474c-ba3f-1dd7f4cbc445" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.925325 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.929553 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.929688 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.929912 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.933970 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:17:36 crc kubenswrapper[4768]: I1124 18:17:36.940517 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"] Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.012159 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.012472 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.012634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlxg\" (UniqueName: \"kubernetes.io/projected/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-kube-api-access-8rlxg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.114457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.114530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlxg\" (UniqueName: \"kubernetes.io/projected/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-kube-api-access-8rlxg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.114605 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: 
\"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.118294 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.118574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.132290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlxg\" (UniqueName: \"kubernetes.io/projected/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-kube-api-access-8rlxg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.246453 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.766925 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"] Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.853900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" event={"ID":"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba","Type":"ContainerStarted","Data":"35d85297aa575d92d4cd5bfe9b099c1d6d5d443765d166fdc2c6eaa40f7a8779"} Nov 24 18:17:37 crc kubenswrapper[4768]: I1124 18:17:37.898598 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:17:37 crc kubenswrapper[4768]: E1124 18:17:37.898809 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:17:38 crc kubenswrapper[4768]: I1124 18:17:38.046872 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-w2rvq"] Nov 24 18:17:38 crc kubenswrapper[4768]: I1124 18:17:38.061289 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-976b-account-create-cmd8x"] Nov 24 18:17:38 crc kubenswrapper[4768]: I1124 18:17:38.071604 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-976b-account-create-cmd8x"] Nov 24 18:17:38 crc kubenswrapper[4768]: I1124 18:17:38.079329 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-w2rvq"] Nov 24 18:17:38 crc kubenswrapper[4768]: I1124 18:17:38.868225 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" event={"ID":"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba","Type":"ContainerStarted","Data":"3133f602100ddbe18f4a72133473eb2d8963717dc24c9ed14f2ccb112a2a2029"} Nov 24 18:17:38 crc kubenswrapper[4768]: I1124 18:17:38.898607 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" podStartSLOduration=2.282276202 podStartE2EDuration="2.898580036s" podCreationTimestamp="2025-11-24 18:17:36 +0000 UTC" firstStartedPulling="2025-11-24 18:17:37.774970617 +0000 UTC m=+1696.635552394" lastFinishedPulling="2025-11-24 18:17:38.391274411 +0000 UTC m=+1697.251856228" observedRunningTime="2025-11-24 18:17:38.89196992 +0000 UTC m=+1697.752551707" watchObservedRunningTime="2025-11-24 18:17:38.898580036 +0000 UTC m=+1697.759161843" Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.039515 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6ht94"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.054101 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8633-account-create-xgxhq"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.066950 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a04b-account-create-5kp5p"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.073797 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6w89b"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.080329 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8633-account-create-xgxhq"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.088721 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6ht94"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.095090 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a04b-account-create-5kp5p"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.101227 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6w89b"] Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.909307 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="060e3ec5-bc92-41ba-be28-81705247ed9f" path="/var/lib/kubelet/pods/060e3ec5-bc92-41ba-be28-81705247ed9f/volumes" Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.909902 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc00587-656d-47e3-bfa1-a722e4a72f2c" path="/var/lib/kubelet/pods/5bc00587-656d-47e3-bfa1-a722e4a72f2c/volumes" Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.910407 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6" path="/var/lib/kubelet/pods/8410c6fc-2a1a-4c46-bd1b-ce4b923abaa6/volumes" Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.910924 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9017e12-9ea0-4e50-9723-980a39a62146" path="/var/lib/kubelet/pods/a9017e12-9ea0-4e50-9723-980a39a62146/volumes" Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.911890 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbff70dc-2806-4b63-abc3-f4e5f69babe1" path="/var/lib/kubelet/pods/bbff70dc-2806-4b63-abc3-f4e5f69babe1/volumes" Nov 24 18:17:39 crc kubenswrapper[4768]: I1124 18:17:39.912393 4768 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fe008faf-7594-433a-90ad-8317cfb54dd2" path="/var/lib/kubelet/pods/fe008faf-7594-433a-90ad-8317cfb54dd2/volumes" Nov 24 18:17:44 crc kubenswrapper[4768]: E1124 18:17:44.982876 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b5568c_b02c_4dc8_a356_a22d9f5815b8.slice/crio-a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:17:47 crc kubenswrapper[4768]: I1124 18:17:47.957780 4768 generic.go:334] "Generic (PLEG): container finished" podID="9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" containerID="3133f602100ddbe18f4a72133473eb2d8963717dc24c9ed14f2ccb112a2a2029" exitCode=0 Nov 24 18:17:47 crc kubenswrapper[4768]: I1124 18:17:47.957858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" event={"ID":"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba","Type":"ContainerDied","Data":"3133f602100ddbe18f4a72133473eb2d8963717dc24c9ed14f2ccb112a2a2029"} Nov 24 18:17:47 crc kubenswrapper[4768]: I1124 18:17:47.969412 4768 scope.go:117] "RemoveContainer" containerID="ea3271f5abd5164ffeb18b5ce7cbc8a2dfceb2c593ea4a75cbb1b4f17d56e371" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.002322 4768 scope.go:117] "RemoveContainer" containerID="29b4af4f0909cde5307e0e956fb1ef96b0bd1b9c913015bbf5519ff9ea20f89b" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.030088 4768 scope.go:117] "RemoveContainer" containerID="da3a805140c40a4c232a97e19b78bf8c4b43f7dc00c8e3187d6435086676ad45" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.091836 4768 scope.go:117] "RemoveContainer" containerID="0096f2a42ff3dfb6df3add48a790ac51d9369bb687b80fcb0267a293bb8cd248" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.153071 4768 scope.go:117] "RemoveContainer" containerID="b183fabfb27b7a8a35767088d942edbcf0f62f3af619354c9815de2624af25ac" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.173416 4768 scope.go:117] "RemoveContainer" containerID="adc5dccce10a7476d8ae5455c617731a4f73e5ae2a8e3d95dae4aaaa2a8365e2" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.211647 4768 scope.go:117] "RemoveContainer" containerID="f403f3cc33f9293e30e21fda3d14e53ba866dc2363087a278f538ee139276367" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.232084 4768 scope.go:117] "RemoveContainer" containerID="4948506a3e29d8d58f7d562879c95934f1cc8eb7d97e904510f8729bc30821ad" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.260531 4768 scope.go:117] "RemoveContainer" containerID="4a063b3dbd324dc8a5bc04c97bd708430fd95cb5c9bcbee43f3670fb789d12d7" Nov 24 18:17:48 crc kubenswrapper[4768]: I1124 18:17:48.299765 4768 scope.go:117] "RemoveContainer" containerID="bf0be984b8c32428988def99e6e7e0103a33e28e64b3345449a381d730e02c78" Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.393080 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.393080 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.457057 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-inventory\") pod \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") "
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.457219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlxg\" (UniqueName: \"kubernetes.io/projected/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-kube-api-access-8rlxg\") pod \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") "
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.457332 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-ssh-key\") pod \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\" (UID: \"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba\") "
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.467694 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-kube-api-access-8rlxg" (OuterVolumeSpecName: "kube-api-access-8rlxg") pod "9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" (UID: "9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba"). InnerVolumeSpecName "kube-api-access-8rlxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.486224 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-inventory" (OuterVolumeSpecName: "inventory") pod "9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" (UID: "9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.490653 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" (UID: "9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.559530 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlxg\" (UniqueName: \"kubernetes.io/projected/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-kube-api-access-8rlxg\") on node \"crc\" DevicePath \"\""
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.559563 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.559575 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.977037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8" event={"ID":"9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba","Type":"ContainerDied","Data":"35d85297aa575d92d4cd5bfe9b099c1d6d5d443765d166fdc2c6eaa40f7a8779"}
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.977083 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35d85297aa575d92d4cd5bfe9b099c1d6d5d443765d166fdc2c6eaa40f7a8779"
Nov 24 18:17:49 crc kubenswrapper[4768]: I1124 18:17:49.977517 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"
Nov 24 18:17:50 crc kubenswrapper[4768]: I1124 18:17:50.899295 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:17:50 crc kubenswrapper[4768]: E1124 18:17:50.899643 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:17:55 crc kubenswrapper[4768]: E1124 18:17:55.219613 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b5568c_b02c_4dc8_a356_a22d9f5815b8.slice/crio-a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 18:18:05 crc kubenswrapper[4768]: E1124 18:18:05.436776 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b5568c_b02c_4dc8_a356_a22d9f5815b8.slice/crio-a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 18:18:05 crc kubenswrapper[4768]: I1124 18:18:05.899208 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:18:05 crc kubenswrapper[4768]: E1124 18:18:05.899548 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:18:15 crc kubenswrapper[4768]: E1124 18:18:15.657680 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b5568c_b02c_4dc8_a356_a22d9f5815b8.slice/crio-a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 18:18:16 crc kubenswrapper[4768]: I1124 18:18:16.045771 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cl9zb"]
Nov 24 18:18:16 crc kubenswrapper[4768]: I1124 18:18:16.054780 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cl9zb"]
Nov 24 18:18:17 crc kubenswrapper[4768]: I1124 18:18:17.908134 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87682ae-914f-4570-9faa-2031bdd70f29" path="/var/lib/kubelet/pods/d87682ae-914f-4570-9faa-2031bdd70f29/volumes"
Nov 24 18:18:20 crc kubenswrapper[4768]: I1124 18:18:20.899559 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:18:20 crc kubenswrapper[4768]: E1124 18:18:20.900098 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:18:33 crc kubenswrapper[4768]: I1124 18:18:33.898050 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:18:33 crc kubenswrapper[4768]: E1124 18:18:33.898890 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:18:39 crc kubenswrapper[4768]: I1124 18:18:39.030982 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgtgz"]
Nov 24 18:18:39 crc kubenswrapper[4768]: I1124 18:18:39.038874 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgtgz"]
Nov 24 18:18:39 crc kubenswrapper[4768]: I1124 18:18:39.910707 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8def680f-a48e-4b0f-9941-0cbb8a626206" path="/var/lib/kubelet/pods/8def680f-a48e-4b0f-9941-0cbb8a626206/volumes"
Nov 24 18:18:41 crc kubenswrapper[4768]: I1124 18:18:41.045688 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7rkgz"]
Nov 24 18:18:41 crc kubenswrapper[4768]: I1124 18:18:41.055523 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7rkgz"]
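The RemoveContainer / CrashLoopBackOff pairs repeating above, and for several minutes below, are the kubelet re-syncing a pod whose container restart back-off has reached its ceiling: by default the per-container delay doubles from a 10s base to a 300s cap, which is the "back-off 5m0s" in these messages (assumed default values; they are configurable). A rough sketch of that capped doubling, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a fixed ceiling,
// mirroring the kubelet's default container restart back-off
// (10s initial, 5m cap); the defaults are an assumption here.
func nextBackoff(prev, max time.Duration) time.Duration {
	if prev == 0 {
		return 10 * time.Second
	}
	if next := prev * 2; next < max {
		return next
	}
	return max
}

func main() {
	var d time.Duration
	for i := 0; i < 8; i++ {
		d = nextBackoff(d, 5*time.Minute)
		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
	}
}
```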
Nov 24 18:18:41 crc kubenswrapper[4768]: I1124 18:18:41.913746 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304a3869-b79e-47bc-ad78-0a4a41868b4f" path="/var/lib/kubelet/pods/304a3869-b79e-47bc-ad78-0a4a41868b4f/volumes"
Nov 24 18:18:45 crc kubenswrapper[4768]: I1124 18:18:45.898563 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:18:45 crc kubenswrapper[4768]: E1124 18:18:45.899165 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:18:48 crc kubenswrapper[4768]: I1124 18:18:48.489375 4768 scope.go:117] "RemoveContainer" containerID="063d5fbc59089ff4c162d5f41af969da5242ecc18d9be1f3016a39eca1a84236"
Nov 24 18:18:48 crc kubenswrapper[4768]: I1124 18:18:48.534018 4768 scope.go:117] "RemoveContainer" containerID="61ad80c7b16201ebc7978162933c85bad525704b18e4eceba5dfcbba71d8d6d3"
Nov 24 18:18:48 crc kubenswrapper[4768]: I1124 18:18:48.588417 4768 scope.go:117] "RemoveContainer" containerID="036a387e45fa1462799505d6a5283c044788f73c5440c68bce8b7cd57ff2299b"
Nov 24 18:19:00 crc kubenswrapper[4768]: I1124 18:19:00.898852 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:19:00 crc kubenswrapper[4768]: E1124 18:19:00.900016 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:19:15 crc kubenswrapper[4768]: I1124 18:19:15.898130 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:19:16 crc kubenswrapper[4768]: E1124 18:19:15.899007 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:19:24 crc kubenswrapper[4768]: I1124 18:19:24.038755 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jmsvx"]
Nov 24 18:19:24 crc kubenswrapper[4768]: I1124 18:19:24.046246 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jmsvx"]
Nov 24 18:19:25 crc kubenswrapper[4768]: I1124 18:19:25.909841 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59de83f1-13f3-416c-a538-008cc9fb6d76" path="/var/lib/kubelet/pods/59de83f1-13f3-416c-a538-008cc9fb6d76/volumes"
Nov 24 18:19:26 crc kubenswrapper[4768]: I1124 18:19:26.898379 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:19:26 crc kubenswrapper[4768]: E1124 18:19:26.898895 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:19:38 crc kubenswrapper[4768]: I1124 18:19:38.898973 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:19:38 crc kubenswrapper[4768]: E1124 18:19:38.899779 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:19:48 crc kubenswrapper[4768]: I1124 18:19:48.683658 4768 scope.go:117] "RemoveContainer" containerID="0099633bf76be402d5195442a63d0ef90ffd809bf304d2248759712ff4b9006c"
Nov 24 18:19:53 crc kubenswrapper[4768]: I1124 18:19:53.898062 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:19:53 crc kubenswrapper[4768]: E1124 18:19:53.898950 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.196124 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6c7"]
Nov 24 18:19:59 crc kubenswrapper[4768]: E1124 18:19:59.196978 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.196991 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.197234 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.198571 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.216762 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6c7"]
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.308347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqbd\" (UniqueName: \"kubernetes.io/projected/72d79dba-8ccf-47ec-8acb-a02ac7498296-kube-api-access-5dqbd\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.308404 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-catalog-content\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.308947 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-utilities\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.411680 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqbd\" (UniqueName: \"kubernetes.io/projected/72d79dba-8ccf-47ec-8acb-a02ac7498296-kube-api-access-5dqbd\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.411743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-catalog-content\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.411820 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-utilities\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.412418 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-catalog-content\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.412438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-utilities\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.434525 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqbd\" (UniqueName: \"kubernetes.io/projected/72d79dba-8ccf-47ec-8acb-a02ac7498296-kube-api-access-5dqbd\") pod \"redhat-marketplace-2n6c7\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") " pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.520525 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:19:59 crc kubenswrapper[4768]: I1124 18:19:59.986791 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6c7"]
Nov 24 18:20:00 crc kubenswrapper[4768]: I1124 18:20:00.189413 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6c7" event={"ID":"72d79dba-8ccf-47ec-8acb-a02ac7498296","Type":"ContainerStarted","Data":"fd47d8783e4c26174d923a24b6da4f0f6dca158f7a874571a3ebce18fd326a96"}
Nov 24 18:20:01 crc kubenswrapper[4768]: I1124 18:20:01.200448 4768 generic.go:334] "Generic (PLEG): container finished" podID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerID="865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08" exitCode=0
Nov 24 18:20:01 crc kubenswrapper[4768]: I1124 18:20:01.200761 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6c7" event={"ID":"72d79dba-8ccf-47ec-8acb-a02ac7498296","Type":"ContainerDied","Data":"865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08"}
Nov 24 18:20:03 crc kubenswrapper[4768]: I1124 18:20:03.222195 4768 generic.go:334] "Generic (PLEG): container finished" podID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerID="5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c" exitCode=0
Nov 24 18:20:03 crc kubenswrapper[4768]: I1124 18:20:03.222299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6c7" event={"ID":"72d79dba-8ccf-47ec-8acb-a02ac7498296","Type":"ContainerDied","Data":"5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c"}
Nov 24 18:20:04 crc kubenswrapper[4768]: I1124 18:20:04.233061 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6c7" event={"ID":"72d79dba-8ccf-47ec-8acb-a02ac7498296","Type":"ContainerStarted","Data":"8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50"}
Nov 24 18:20:04 crc kubenswrapper[4768]: I1124 18:20:04.256336 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2n6c7" podStartSLOduration=2.803717937 podStartE2EDuration="5.256315893s" podCreationTimestamp="2025-11-24 18:19:59 +0000 UTC" firstStartedPulling="2025-11-24 18:20:01.202166756 +0000 UTC m=+1840.062748533" lastFinishedPulling="2025-11-24 18:20:03.654764712 +0000 UTC m=+1842.515346489" observedRunningTime="2025-11-24 18:20:04.24920921 +0000 UTC m=+1843.109790987" watchObservedRunningTime="2025-11-24 18:20:04.256315893 +0000 UTC m=+1843.116897670"
Nov 24 18:20:05 crc kubenswrapper[4768]: I1124 18:20:05.898936 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d"
Nov 24 18:20:05 crc kubenswrapper[4768]: E1124 18:20:05.899326 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:20:09 crc kubenswrapper[4768]: I1124 18:20:09.521031 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:20:09 crc kubenswrapper[4768]: I1124 18:20:09.521372 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:20:09 crc kubenswrapper[4768]: I1124 18:20:09.565008 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:20:10 crc kubenswrapper[4768]: I1124 18:20:10.339240 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:20:10 crc kubenswrapper[4768]: I1124 18:20:10.386334 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6c7"]
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.306299 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2n6c7" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="registry-server" containerID="cri-o://8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50" gracePeriod=2
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.724884 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.866544 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-catalog-content\") pod \"72d79dba-8ccf-47ec-8acb-a02ac7498296\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") "
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.866638 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-utilities\") pod \"72d79dba-8ccf-47ec-8acb-a02ac7498296\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") "
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.866700 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqbd\" (UniqueName: \"kubernetes.io/projected/72d79dba-8ccf-47ec-8acb-a02ac7498296-kube-api-access-5dqbd\") pod \"72d79dba-8ccf-47ec-8acb-a02ac7498296\" (UID: \"72d79dba-8ccf-47ec-8acb-a02ac7498296\") "
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.867744 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-utilities" (OuterVolumeSpecName: "utilities") pod "72d79dba-8ccf-47ec-8acb-a02ac7498296" (UID: "72d79dba-8ccf-47ec-8acb-a02ac7498296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.873379 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d79dba-8ccf-47ec-8acb-a02ac7498296-kube-api-access-5dqbd" (OuterVolumeSpecName: "kube-api-access-5dqbd") pod "72d79dba-8ccf-47ec-8acb-a02ac7498296" (UID: "72d79dba-8ccf-47ec-8acb-a02ac7498296"). InnerVolumeSpecName "kube-api-access-5dqbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.884355 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72d79dba-8ccf-47ec-8acb-a02ac7498296" (UID: "72d79dba-8ccf-47ec-8acb-a02ac7498296"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.968685 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.968964 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dqbd\" (UniqueName: \"kubernetes.io/projected/72d79dba-8ccf-47ec-8acb-a02ac7498296-kube-api-access-5dqbd\") on node \"crc\" DevicePath \"\""
Nov 24 18:20:12 crc kubenswrapper[4768]: I1124 18:20:12.969050 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72d79dba-8ccf-47ec-8acb-a02ac7498296-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.316981 4768 generic.go:334] "Generic (PLEG): container finished" podID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerID="8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50" exitCode=0
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.317032 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6c7" event={"ID":"72d79dba-8ccf-47ec-8acb-a02ac7498296","Type":"ContainerDied","Data":"8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50"}
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.317067 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6c7" event={"ID":"72d79dba-8ccf-47ec-8acb-a02ac7498296","Type":"ContainerDied","Data":"fd47d8783e4c26174d923a24b6da4f0f6dca158f7a874571a3ebce18fd326a96"}
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.317091 4768 scope.go:117] "RemoveContainer" containerID="8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.317116 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6c7"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.337188 4768 scope.go:117] "RemoveContainer" containerID="5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.356168 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6c7"]
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.367038 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6c7"]
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.375791 4768 scope.go:117] "RemoveContainer" containerID="865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.406437 4768 scope.go:117] "RemoveContainer" containerID="8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50"
Nov 24 18:20:13 crc kubenswrapper[4768]: E1124 18:20:13.407596 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50\": container with ID starting with 8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50 not found: ID does not exist" containerID="8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.407656 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50"} err="failed to get container status \"8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50\": rpc error: code = NotFound desc = could not find container \"8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50\": container with ID starting with 8dac235f2699b2a51b5a5c4f25716db78f2661c75bb13b67a118c50b00994f50 not found: ID does not exist"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.407688 4768 scope.go:117] "RemoveContainer" containerID="5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c"
Nov 24 18:20:13 crc kubenswrapper[4768]: E1124 18:20:13.408013 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c\": container with ID starting with 5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c not found: ID does not exist" containerID="5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c"
Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.408045 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c"} err="failed to get container status \"5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c\": rpc error: code = NotFound desc = could not find container \"5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c\": container with ID starting with 5bae334c48b7d9de21e774df1bfa07b20afc2c0d375c05e66b133a4105f1a66c not found: ID does not exist"
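The E/I pairs at 18:20:13 show the container deletor asking the CRI runtime for the status of containers that have already been removed and tolerating the NotFound answer (it logs the error and moves on). The usual way to make such a delete idempotent over gRPC looks roughly like this (a sketch; the helper name is mine, not the kubelet's):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats a gRPC NotFound as success, the pattern applied
// when a container is already gone from the runtime by the time we ask.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Simulated runtime response for an already-removed container ID.
	err := status.Error(codes.NotFound, "could not find container")
	if ignoreNotFound(err) == nil {
		fmt.Println("container already removed; nothing to do")
	}
}
```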
failed" err="rpc error: code = NotFound desc = could not find container \"865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08\": container with ID starting with 865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08 not found: ID does not exist" containerID="865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08" Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.408367 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08"} err="failed to get container status \"865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08\": rpc error: code = NotFound desc = could not find container \"865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08\": container with ID starting with 865ffc6edd32b058cbf1cd9f0f914e1998633f498b1ffe7a08eacbc95617fe08 not found: ID does not exist" Nov 24 18:20:13 crc kubenswrapper[4768]: I1124 18:20:13.912840 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" path="/var/lib/kubelet/pods/72d79dba-8ccf-47ec-8acb-a02ac7498296/volumes" Nov 24 18:20:16 crc kubenswrapper[4768]: I1124 18:20:16.898741 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:20:17 crc kubenswrapper[4768]: I1124 18:20:17.349172 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"f1549e11399ec1dbcd19fbf82542cd22671bbfb667e034644ac3a36883dd42c3"} Nov 24 18:22:43 crc kubenswrapper[4768]: I1124 18:22:43.656272 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:22:43 crc kubenswrapper[4768]: I1124 18:22:43.656867 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.580515 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.598243 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.607864 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-95vn5"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.616297 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.622260 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-g7dw8"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.630307 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2jrcd"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.639029 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-95vn5"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.645033 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.654796 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.664229 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vs564"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.672989 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.680349 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.686371 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zqmdh"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.691703 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mzt5v"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.710346 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.719877 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.727001 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vhwwn"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.733268 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-jzrqp"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.741405 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bmmvg"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.748580 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fhd5c"] Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.912402 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f85e24-4898-4ff4-8fca-995a0a85ad6e" path="/var/lib/kubelet/pods/28f85e24-4898-4ff4-8fca-995a0a85ad6e/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.914636 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849" path="/var/lib/kubelet/pods/6c7ac3ba-8436-4a5b-8da6-44b1ba7ea849/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.916448 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f9e8fa-639a-4ac6-9a56-437263b9f342" path="/var/lib/kubelet/pods/80f9e8fa-639a-4ac6-9a56-437263b9f342/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.918011 4768 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="88b5568c-b02c-4dc8-a356-a22d9f5815b8" path="/var/lib/kubelet/pods/88b5568c-b02c-4dc8-a356-a22d9f5815b8/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.919648 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9036b15a-a981-414b-bb2f-dfc6c951f45a" path="/var/lib/kubelet/pods/9036b15a-a981-414b-bb2f-dfc6c951f45a/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.920794 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991cdac9-8e35-4e4d-bba0-f1aa5cb5981e" path="/var/lib/kubelet/pods/991cdac9-8e35-4e4d-bba0-f1aa5cb5981e/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.921952 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba" path="/var/lib/kubelet/pods/9fb7e9a8-aa78-44c2-b400-59a4ca60e4ba/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.923676 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8027614-458f-4bf6-a0fd-931723d17b8c" path="/var/lib/kubelet/pods/c8027614-458f-4bf6-a0fd-931723d17b8c/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.924659 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e810de0d-60fb-474c-ba3f-1dd7f4cbc445" path="/var/lib/kubelet/pods/e810de0d-60fb-474c-ba3f-1dd7f4cbc445/volumes" Nov 24 18:22:51 crc kubenswrapper[4768]: I1124 18:22:51.925687 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea64d47-cdaf-4b62-906f-914aa42a9e60" path="/var/lib/kubelet/pods/eea64d47-cdaf-4b62-906f-914aa42a9e60/volumes" Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.523969 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"] Nov 24 18:22:57 crc kubenswrapper[4768]: E1124 18:22:57.524881 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="registry-server" Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.524896 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="registry-server" Nov 24 18:22:57 crc kubenswrapper[4768]: E1124 18:22:57.524910 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="extract-utilities" Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.524916 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="extract-utilities" Nov 24 18:22:57 crc kubenswrapper[4768]: E1124 18:22:57.524944 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="extract-content" Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.524951 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="extract-content" Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.525116 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d79dba-8ccf-47ec-8acb-a02ac7498296" containerName="registry-server" Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.525761 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.525761 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.540214 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.540214 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.540322 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.540356 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.540426 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.542246 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"]
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.571316 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.571449 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.571537 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trf2h\" (UniqueName: \"kubernetes.io/projected/e2f4a9fd-b80f-44d1-80b8-298119d3b967-kube-api-access-trf2h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.571576 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.571601 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.673435 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.673524 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trf2h\" (UniqueName: \"kubernetes.io/projected/e2f4a9fd-b80f-44d1-80b8-298119d3b967-kube-api-access-trf2h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.673556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.673578 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.673627 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.679301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.679814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.680206 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.680908 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.691615 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trf2h\" (UniqueName: \"kubernetes.io/projected/e2f4a9fd-b80f-44d1-80b8-298119d3b967-kube-api-access-trf2h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:57 crc kubenswrapper[4768]: I1124 18:22:57.856195 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"
Nov 24 18:22:58 crc kubenswrapper[4768]: I1124 18:22:58.400072 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b"]
Nov 24 18:22:58 crc kubenswrapper[4768]: I1124 18:22:58.403770 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 18:22:59 crc kubenswrapper[4768]: I1124 18:22:59.229468 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" event={"ID":"e2f4a9fd-b80f-44d1-80b8-298119d3b967","Type":"ContainerStarted","Data":"619146be9d546cb3659ce53d0f310dd69e300aa317ecebcc2d8f718a658b0fb2"}
Nov 24 18:22:59 crc kubenswrapper[4768]: I1124 18:22:59.229539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" event={"ID":"e2f4a9fd-b80f-44d1-80b8-298119d3b967","Type":"ContainerStarted","Data":"484b4a9e4cdde88d971a2e0a9e4237649b541549dde48d95b275f403bf44a9c3"}
Nov 24 18:22:59 crc kubenswrapper[4768]: I1124 18:22:59.250301 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" podStartSLOduration=1.752244053 podStartE2EDuration="2.250282239s" podCreationTimestamp="2025-11-24 18:22:57 +0000 UTC" firstStartedPulling="2025-11-24 18:22:58.403559317 +0000 UTC m=+2017.264141094" lastFinishedPulling="2025-11-24 18:22:58.901597463 +0000 UTC m=+2017.762179280" observedRunningTime="2025-11-24 18:22:59.246068384 +0000 UTC m=+2018.106650181" watchObservedRunningTime="2025-11-24 18:22:59.250282239 +0000 UTC m=+2018.110864026"
Nov 24 18:23:10 crc kubenswrapper[4768]: I1124 18:23:10.333556 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2f4a9fd-b80f-44d1-80b8-298119d3b967" containerID="619146be9d546cb3659ce53d0f310dd69e300aa317ecebcc2d8f718a658b0fb2" exitCode=0
Nov 24 18:23:10 crc kubenswrapper[4768]: I1124 18:23:10.333667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" event={"ID":"e2f4a9fd-b80f-44d1-80b8-298119d3b967","Type":"ContainerDied","Data":"619146be9d546cb3659ce53d0f310dd69e300aa317ecebcc2d8f718a658b0fb2"}
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.838781 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-repo-setup-combined-ca-bundle\") pod \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.838818 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-inventory\") pod \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.838846 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trf2h\" (UniqueName: \"kubernetes.io/projected/e2f4a9fd-b80f-44d1-80b8-298119d3b967-kube-api-access-trf2h\") pod \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.838945 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ssh-key\") pod \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.838987 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ceph\") pod \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\" (UID: \"e2f4a9fd-b80f-44d1-80b8-298119d3b967\") " Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.844058 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ceph" (OuterVolumeSpecName: "ceph") pod "e2f4a9fd-b80f-44d1-80b8-298119d3b967" (UID: "e2f4a9fd-b80f-44d1-80b8-298119d3b967"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.845876 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f4a9fd-b80f-44d1-80b8-298119d3b967-kube-api-access-trf2h" (OuterVolumeSpecName: "kube-api-access-trf2h") pod "e2f4a9fd-b80f-44d1-80b8-298119d3b967" (UID: "e2f4a9fd-b80f-44d1-80b8-298119d3b967"). InnerVolumeSpecName "kube-api-access-trf2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.846932 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e2f4a9fd-b80f-44d1-80b8-298119d3b967" (UID: "e2f4a9fd-b80f-44d1-80b8-298119d3b967"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.869185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-inventory" (OuterVolumeSpecName: "inventory") pod "e2f4a9fd-b80f-44d1-80b8-298119d3b967" (UID: "e2f4a9fd-b80f-44d1-80b8-298119d3b967"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.869886 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e2f4a9fd-b80f-44d1-80b8-298119d3b967" (UID: "e2f4a9fd-b80f-44d1-80b8-298119d3b967"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.941460 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.941503 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.941513 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trf2h\" (UniqueName: \"kubernetes.io/projected/e2f4a9fd-b80f-44d1-80b8-298119d3b967-kube-api-access-trf2h\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.941524 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:11 crc kubenswrapper[4768]: I1124 18:23:11.941534 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2f4a9fd-b80f-44d1-80b8-298119d3b967-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.353869 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" event={"ID":"e2f4a9fd-b80f-44d1-80b8-298119d3b967","Type":"ContainerDied","Data":"484b4a9e4cdde88d971a2e0a9e4237649b541549dde48d95b275f403bf44a9c3"} Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.353913 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484b4a9e4cdde88d971a2e0a9e4237649b541549dde48d95b275f403bf44a9c3" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.353925 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.420756 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx"] Nov 24 18:23:12 crc kubenswrapper[4768]: E1124 18:23:12.421151 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f4a9fd-b80f-44d1-80b8-298119d3b967" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.421172 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f4a9fd-b80f-44d1-80b8-298119d3b967" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.421369 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f4a9fd-b80f-44d1-80b8-298119d3b967" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.421964 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.427369 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.427635 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.427894 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.428835 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.428844 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.435402 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx"] Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.550713 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.550810 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.550836 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqjl\" (UniqueName: \"kubernetes.io/projected/03a4429e-4032-4d71-adc7-7257ac152323-kube-api-access-rlqjl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.550880 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.551024 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.653071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.653139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqjl\" (UniqueName: \"kubernetes.io/projected/03a4429e-4032-4d71-adc7-7257ac152323-kube-api-access-rlqjl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.653236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.653300 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.653434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.657299 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" 
Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.657995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.658383 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.659373 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.670427 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqjl\" (UniqueName: \"kubernetes.io/projected/03a4429e-4032-4d71-adc7-7257ac152323-kube-api-access-rlqjl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:12 crc kubenswrapper[4768]: I1124 18:23:12.749075 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:23:13 crc kubenswrapper[4768]: I1124 18:23:13.245202 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx"] Nov 24 18:23:13 crc kubenswrapper[4768]: W1124 18:23:13.256532 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03a4429e_4032_4d71_adc7_7257ac152323.slice/crio-7cddb0d4860a559bb2334c1a504dd2381e491b0e857cc7f048d69927acdff614 WatchSource:0}: Error finding container 7cddb0d4860a559bb2334c1a504dd2381e491b0e857cc7f048d69927acdff614: Status 404 returned error can't find the container with id 7cddb0d4860a559bb2334c1a504dd2381e491b0e857cc7f048d69927acdff614 Nov 24 18:23:13 crc kubenswrapper[4768]: I1124 18:23:13.364297 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" event={"ID":"03a4429e-4032-4d71-adc7-7257ac152323","Type":"ContainerStarted","Data":"7cddb0d4860a559bb2334c1a504dd2381e491b0e857cc7f048d69927acdff614"} Nov 24 18:23:13 crc kubenswrapper[4768]: I1124 18:23:13.655983 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:23:13 crc kubenswrapper[4768]: I1124 18:23:13.656042 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:23:14 crc kubenswrapper[4768]: I1124 18:23:14.377981 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" event={"ID":"03a4429e-4032-4d71-adc7-7257ac152323","Type":"ContainerStarted","Data":"3323293d2bb2d29c5dce33e1f5eec9f7fd3c4138dfa642212aa94a1e8caf5c43"} Nov 24 18:23:14 crc kubenswrapper[4768]: I1124 18:23:14.410716 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" podStartSLOduration=2.031607629 podStartE2EDuration="2.410693829s" podCreationTimestamp="2025-11-24 18:23:12 +0000 UTC" firstStartedPulling="2025-11-24 18:23:13.26094463 +0000 UTC m=+2032.121526407" lastFinishedPulling="2025-11-24 18:23:13.64003078 +0000 UTC m=+2032.500612607" observedRunningTime="2025-11-24 18:23:14.40334061 +0000 UTC m=+2033.263922417" watchObservedRunningTime="2025-11-24 18:23:14.410693829 +0000 UTC m=+2033.271275616" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.561977 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-djlww"] Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.565241 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.597357 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djlww"] Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.710658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-catalog-content\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.710811 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvd4\" (UniqueName: \"kubernetes.io/projected/a8349500-cad9-4c8f-b139-019fc2d196ed-kube-api-access-2bvd4\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.710840 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-utilities\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.812879 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-catalog-content\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.812997 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvd4\" (UniqueName: \"kubernetes.io/projected/a8349500-cad9-4c8f-b139-019fc2d196ed-kube-api-access-2bvd4\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.813020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-utilities\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.813760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-utilities\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.813936 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-catalog-content\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.832407 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2bvd4\" (UniqueName: \"kubernetes.io/projected/a8349500-cad9-4c8f-b139-019fc2d196ed-kube-api-access-2bvd4\") pod \"community-operators-djlww\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:25 crc kubenswrapper[4768]: I1124 18:23:25.947566 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:26 crc kubenswrapper[4768]: I1124 18:23:26.444357 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djlww"] Nov 24 18:23:26 crc kubenswrapper[4768]: W1124 18:23:26.451037 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8349500_cad9_4c8f_b139_019fc2d196ed.slice/crio-8f892bd0213c46bf4d9e1d4c9168cba2057e0683fc3b7291715caea9ce919474 WatchSource:0}: Error finding container 8f892bd0213c46bf4d9e1d4c9168cba2057e0683fc3b7291715caea9ce919474: Status 404 returned error can't find the container with id 8f892bd0213c46bf4d9e1d4c9168cba2057e0683fc3b7291715caea9ce919474 Nov 24 18:23:26 crc kubenswrapper[4768]: I1124 18:23:26.483758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerStarted","Data":"8f892bd0213c46bf4d9e1d4c9168cba2057e0683fc3b7291715caea9ce919474"} Nov 24 18:23:27 crc kubenswrapper[4768]: I1124 18:23:27.494274 4768 generic.go:334] "Generic (PLEG): container finished" podID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerID="7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e" exitCode=0 Nov 24 18:23:27 crc kubenswrapper[4768]: I1124 18:23:27.494411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerDied","Data":"7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e"} Nov 24 18:23:28 crc kubenswrapper[4768]: I1124 18:23:28.505021 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerStarted","Data":"23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a"} Nov 24 18:23:29 crc kubenswrapper[4768]: I1124 18:23:29.516341 4768 generic.go:334] "Generic (PLEG): container finished" podID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerID="23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a" exitCode=0 Nov 24 18:23:29 crc kubenswrapper[4768]: I1124 18:23:29.516411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerDied","Data":"23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a"} Nov 24 18:23:30 crc kubenswrapper[4768]: I1124 18:23:30.532068 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerStarted","Data":"5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c"} Nov 24 18:23:30 crc kubenswrapper[4768]: I1124 18:23:30.557980 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-djlww" 
podStartSLOduration=3.131584185 podStartE2EDuration="5.557951944s" podCreationTimestamp="2025-11-24 18:23:25 +0000 UTC" firstStartedPulling="2025-11-24 18:23:27.49601666 +0000 UTC m=+2046.356598437" lastFinishedPulling="2025-11-24 18:23:29.922384419 +0000 UTC m=+2048.782966196" observedRunningTime="2025-11-24 18:23:30.550504732 +0000 UTC m=+2049.411086509" watchObservedRunningTime="2025-11-24 18:23:30.557951944 +0000 UTC m=+2049.418533761" Nov 24 18:23:35 crc kubenswrapper[4768]: I1124 18:23:35.948635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:35 crc kubenswrapper[4768]: I1124 18:23:35.949103 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:36 crc kubenswrapper[4768]: I1124 18:23:36.001483 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:36 crc kubenswrapper[4768]: I1124 18:23:36.657410 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:36 crc kubenswrapper[4768]: I1124 18:23:36.721768 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-djlww"] Nov 24 18:23:38 crc kubenswrapper[4768]: I1124 18:23:38.616895 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-djlww" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="registry-server" containerID="cri-o://5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c" gracePeriod=2 Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.044563 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.167010 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-catalog-content\") pod \"a8349500-cad9-4c8f-b139-019fc2d196ed\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.167165 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-utilities\") pod \"a8349500-cad9-4c8f-b139-019fc2d196ed\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.167375 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvd4\" (UniqueName: \"kubernetes.io/projected/a8349500-cad9-4c8f-b139-019fc2d196ed-kube-api-access-2bvd4\") pod \"a8349500-cad9-4c8f-b139-019fc2d196ed\" (UID: \"a8349500-cad9-4c8f-b139-019fc2d196ed\") " Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.167958 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-utilities" (OuterVolumeSpecName: "utilities") pod "a8349500-cad9-4c8f-b139-019fc2d196ed" (UID: "a8349500-cad9-4c8f-b139-019fc2d196ed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.172984 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8349500-cad9-4c8f-b139-019fc2d196ed-kube-api-access-2bvd4" (OuterVolumeSpecName: "kube-api-access-2bvd4") pod "a8349500-cad9-4c8f-b139-019fc2d196ed" (UID: "a8349500-cad9-4c8f-b139-019fc2d196ed"). InnerVolumeSpecName "kube-api-access-2bvd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.219637 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8349500-cad9-4c8f-b139-019fc2d196ed" (UID: "a8349500-cad9-4c8f-b139-019fc2d196ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.269642 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.269709 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvd4\" (UniqueName: \"kubernetes.io/projected/a8349500-cad9-4c8f-b139-019fc2d196ed-kube-api-access-2bvd4\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.269725 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8349500-cad9-4c8f-b139-019fc2d196ed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.628406 4768 generic.go:334] "Generic (PLEG): container finished" podID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerID="5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c" exitCode=0 Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.628448 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerDied","Data":"5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c"} Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.628465 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-djlww" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.629351 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djlww" event={"ID":"a8349500-cad9-4c8f-b139-019fc2d196ed","Type":"ContainerDied","Data":"8f892bd0213c46bf4d9e1d4c9168cba2057e0683fc3b7291715caea9ce919474"} Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.629380 4768 scope.go:117] "RemoveContainer" containerID="5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.654265 4768 scope.go:117] "RemoveContainer" containerID="23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.666957 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-djlww"] Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.684171 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-djlww"] Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.687562 4768 scope.go:117] "RemoveContainer" containerID="7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.722760 4768 scope.go:117] "RemoveContainer" containerID="5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c" Nov 24 18:23:39 crc kubenswrapper[4768]: E1124 18:23:39.723411 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c\": container with ID starting with 5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c not found: ID does not exist" containerID="5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.723448 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c"} err="failed to get container status \"5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c\": rpc error: code = NotFound desc = could not find container \"5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c\": container with ID starting with 5820a76ce458cb5057867b966843947c4041af96ee2daff0680e59e007b0a59c not found: ID does not exist" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.723475 4768 scope.go:117] "RemoveContainer" containerID="23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a" Nov 24 18:23:39 crc kubenswrapper[4768]: E1124 18:23:39.724394 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a\": container with ID starting with 23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a not found: ID does not exist" containerID="23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.724420 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a"} err="failed to get container status \"23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a\": rpc error: code = NotFound desc = could not find 
container \"23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a\": container with ID starting with 23c28f1f0618a137204b3bd52d4f3dc785c17650783ff8e4c50dd9f96763e99a not found: ID does not exist" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.724434 4768 scope.go:117] "RemoveContainer" containerID="7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e" Nov 24 18:23:39 crc kubenswrapper[4768]: E1124 18:23:39.724659 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e\": container with ID starting with 7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e not found: ID does not exist" containerID="7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.724676 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e"} err="failed to get container status \"7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e\": rpc error: code = NotFound desc = could not find container \"7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e\": container with ID starting with 7592e22f3cb714e5d487cc5eb353b39c7c0ea1f850ad43ca1eaa245ac300c75e not found: ID does not exist" Nov 24 18:23:39 crc kubenswrapper[4768]: I1124 18:23:39.918307 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" path="/var/lib/kubelet/pods/a8349500-cad9-4c8f-b139-019fc2d196ed/volumes" Nov 24 18:23:43 crc kubenswrapper[4768]: I1124 18:23:43.656720 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:23:43 crc kubenswrapper[4768]: I1124 18:23:43.657131 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:23:43 crc kubenswrapper[4768]: I1124 18:23:43.657182 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:23:43 crc kubenswrapper[4768]: I1124 18:23:43.657724 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1549e11399ec1dbcd19fbf82542cd22671bbfb667e034644ac3a36883dd42c3"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:23:43 crc kubenswrapper[4768]: I1124 18:23:43.657779 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://f1549e11399ec1dbcd19fbf82542cd22671bbfb667e034644ac3a36883dd42c3" gracePeriod=600 Nov 24 18:23:44 crc kubenswrapper[4768]: I1124 18:23:44.677804 4768 
generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="f1549e11399ec1dbcd19fbf82542cd22671bbfb667e034644ac3a36883dd42c3" exitCode=0 Nov 24 18:23:44 crc kubenswrapper[4768]: I1124 18:23:44.677851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"f1549e11399ec1dbcd19fbf82542cd22671bbfb667e034644ac3a36883dd42c3"} Nov 24 18:23:44 crc kubenswrapper[4768]: I1124 18:23:44.678544 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"} Nov 24 18:23:44 crc kubenswrapper[4768]: I1124 18:23:44.678579 4768 scope.go:117] "RemoveContainer" containerID="7145d8b18bda77f2f8d27587d995b3701f83fbf51d192432bfed62b3a21e2f2d" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.558294 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lsdlv"] Nov 24 18:23:46 crc kubenswrapper[4768]: E1124 18:23:46.559181 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="extract-content" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.559193 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="extract-content" Nov 24 18:23:46 crc kubenswrapper[4768]: E1124 18:23:46.559216 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="registry-server" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.559224 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="registry-server" Nov 24 18:23:46 crc kubenswrapper[4768]: E1124 18:23:46.559249 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="extract-utilities" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.559259 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="extract-utilities" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.559462 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8349500-cad9-4c8f-b139-019fc2d196ed" containerName="registry-server" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.560984 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.572857 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsdlv"] Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.601985 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-utilities\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.602126 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbdlb\" (UniqueName: \"kubernetes.io/projected/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-kube-api-access-jbdlb\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.602217 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-catalog-content\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.703801 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-utilities\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.703977 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbdlb\" (UniqueName: \"kubernetes.io/projected/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-kube-api-access-jbdlb\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.704100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-catalog-content\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.704748 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-catalog-content\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.704876 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-utilities\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.725391 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jbdlb\" (UniqueName: \"kubernetes.io/projected/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-kube-api-access-jbdlb\") pod \"certified-operators-lsdlv\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:46 crc kubenswrapper[4768]: I1124 18:23:46.882009 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:47 crc kubenswrapper[4768]: I1124 18:23:47.194141 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsdlv"] Nov 24 18:23:47 crc kubenswrapper[4768]: I1124 18:23:47.709903 4768 generic.go:334] "Generic (PLEG): container finished" podID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerID="7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835" exitCode=0 Nov 24 18:23:47 crc kubenswrapper[4768]: I1124 18:23:47.709957 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsdlv" event={"ID":"cfd40897-142e-42d7-9c3f-7e2e4f4c304c","Type":"ContainerDied","Data":"7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835"} Nov 24 18:23:47 crc kubenswrapper[4768]: I1124 18:23:47.709993 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsdlv" event={"ID":"cfd40897-142e-42d7-9c3f-7e2e4f4c304c","Type":"ContainerStarted","Data":"77efed71139fc3e681564d56b3b1f9c27cbdc721dde6f0bbf5ece9ad39f69804"} Nov 24 18:23:48 crc kubenswrapper[4768]: I1124 18:23:48.822532 4768 scope.go:117] "RemoveContainer" containerID="aa1407d98acb35b29a51aefb0376dcac03265d8d36005b00520241f977e4b3ff" Nov 24 18:23:48 crc kubenswrapper[4768]: I1124 18:23:48.878386 4768 scope.go:117] "RemoveContainer" containerID="3133f602100ddbe18f4a72133473eb2d8963717dc24c9ed14f2ccb112a2a2029" Nov 24 18:23:48 crc kubenswrapper[4768]: I1124 18:23:48.916367 4768 scope.go:117] "RemoveContainer" containerID="1fa8c1a58e7ebc61eed2c7259d87ba0fd07ba89ea54ad0b48ad55b1964b7fccb" Nov 24 18:23:48 crc kubenswrapper[4768]: I1124 18:23:48.958920 4768 scope.go:117] "RemoveContainer" containerID="f0bd82ce6408f4d732f61e357e2f27322d7becc81c04d8b2bd9e65ff5d7f2c7a" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.038785 4768 scope.go:117] "RemoveContainer" containerID="a29ab38ec3305084257d701bd372b6c734ec329ace94a90731b4ffafa5a64890" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.066522 4768 scope.go:117] "RemoveContainer" containerID="429480a2ea66dc04bc3d43f98f64beb9e3240c5c33c2f54b7498e4b42367bccf" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.149290 4768 scope.go:117] "RemoveContainer" containerID="1ae642ea1a38d2cf0cc6d3050cf672a7f9e05473aa0526aa22844cab052d4588" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.174338 4768 scope.go:117] "RemoveContainer" containerID="4a6213edac136ac0c5576344cdef037bf92046653801179bfbf1b691baa63956" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.199050 4768 scope.go:117] "RemoveContainer" containerID="61d78c2ada8c5af9b55dedcd7f435044d4457f456a22b042642028aa2bf5753a" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.237812 4768 scope.go:117] "RemoveContainer" containerID="9c69262bc637d665057ffdf7d9990b1722d521d791c0320c9e8195b54eed1578" Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.726944 4768 generic.go:334] "Generic (PLEG): container finished" podID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" 
containerID="3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f" exitCode=0 Nov 24 18:23:49 crc kubenswrapper[4768]: I1124 18:23:49.727001 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsdlv" event={"ID":"cfd40897-142e-42d7-9c3f-7e2e4f4c304c","Type":"ContainerDied","Data":"3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f"} Nov 24 18:23:50 crc kubenswrapper[4768]: I1124 18:23:50.738220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsdlv" event={"ID":"cfd40897-142e-42d7-9c3f-7e2e4f4c304c","Type":"ContainerStarted","Data":"e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407"} Nov 24 18:23:50 crc kubenswrapper[4768]: I1124 18:23:50.762514 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lsdlv" podStartSLOduration=2.314350192 podStartE2EDuration="4.762475411s" podCreationTimestamp="2025-11-24 18:23:46 +0000 UTC" firstStartedPulling="2025-11-24 18:23:47.712654185 +0000 UTC m=+2066.573235962" lastFinishedPulling="2025-11-24 18:23:50.160779394 +0000 UTC m=+2069.021361181" observedRunningTime="2025-11-24 18:23:50.759607953 +0000 UTC m=+2069.620189760" watchObservedRunningTime="2025-11-24 18:23:50.762475411 +0000 UTC m=+2069.623057198" Nov 24 18:23:56 crc kubenswrapper[4768]: I1124 18:23:56.882635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:56 crc kubenswrapper[4768]: I1124 18:23:56.883184 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:56 crc kubenswrapper[4768]: I1124 18:23:56.948406 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:57 crc kubenswrapper[4768]: I1124 18:23:57.845128 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:23:57 crc kubenswrapper[4768]: I1124 18:23:57.908518 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsdlv"] Nov 24 18:23:59 crc kubenswrapper[4768]: I1124 18:23:59.816130 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lsdlv" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="registry-server" containerID="cri-o://e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407" gracePeriod=2 Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.296999 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.465688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbdlb\" (UniqueName: \"kubernetes.io/projected/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-kube-api-access-jbdlb\") pod \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.465754 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-catalog-content\") pod \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.465873 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-utilities\") pod \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\" (UID: \"cfd40897-142e-42d7-9c3f-7e2e4f4c304c\") " Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.467045 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-utilities" (OuterVolumeSpecName: "utilities") pod "cfd40897-142e-42d7-9c3f-7e2e4f4c304c" (UID: "cfd40897-142e-42d7-9c3f-7e2e4f4c304c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.474678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-kube-api-access-jbdlb" (OuterVolumeSpecName: "kube-api-access-jbdlb") pod "cfd40897-142e-42d7-9c3f-7e2e4f4c304c" (UID: "cfd40897-142e-42d7-9c3f-7e2e4f4c304c"). InnerVolumeSpecName "kube-api-access-jbdlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.534817 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfd40897-142e-42d7-9c3f-7e2e4f4c304c" (UID: "cfd40897-142e-42d7-9c3f-7e2e4f4c304c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.567408 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbdlb\" (UniqueName: \"kubernetes.io/projected/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-kube-api-access-jbdlb\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.567442 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.567459 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd40897-142e-42d7-9c3f-7e2e4f4c304c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.826930 4768 generic.go:334] "Generic (PLEG): container finished" podID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerID="e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407" exitCode=0 Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.826992 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsdlv" event={"ID":"cfd40897-142e-42d7-9c3f-7e2e4f4c304c","Type":"ContainerDied","Data":"e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407"} Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.827030 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsdlv" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.827051 4768 scope.go:117] "RemoveContainer" containerID="e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.827035 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsdlv" event={"ID":"cfd40897-142e-42d7-9c3f-7e2e4f4c304c","Type":"ContainerDied","Data":"77efed71139fc3e681564d56b3b1f9c27cbdc721dde6f0bbf5ece9ad39f69804"} Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.845230 4768 scope.go:117] "RemoveContainer" containerID="3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.865338 4768 scope.go:117] "RemoveContainer" containerID="7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.868861 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsdlv"] Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.880072 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lsdlv"] Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.903929 4768 scope.go:117] "RemoveContainer" containerID="e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407" Nov 24 18:24:00 crc kubenswrapper[4768]: E1124 18:24:00.905003 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407\": container with ID starting with e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407 not found: ID does not exist" containerID="e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.905038 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407"} err="failed to get container status \"e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407\": rpc error: code = NotFound desc = could not find container \"e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407\": container with ID starting with e5bcdac62938ef345bbd5e86108fb2443985608ea040955386bf515d22cf8407 not found: ID does not exist" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.905059 4768 scope.go:117] "RemoveContainer" containerID="3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f" Nov 24 18:24:00 crc kubenswrapper[4768]: E1124 18:24:00.905283 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f\": container with ID starting with 3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f not found: ID does not exist" containerID="3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.905308 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f"} err="failed to get container status \"3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f\": rpc error: code = NotFound desc = could not find container \"3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f\": container with ID starting with 3c921d4b29aa4d16bb81b919a27d9ab8e7c0ef399f0d7fb6072db257341c6d6f not found: ID does not exist" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.905322 4768 scope.go:117] "RemoveContainer" containerID="7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835" Nov 24 18:24:00 crc kubenswrapper[4768]: E1124 18:24:00.905573 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835\": container with ID starting with 7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835 not found: ID does not exist" containerID="7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835" Nov 24 18:24:00 crc kubenswrapper[4768]: I1124 18:24:00.905592 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835"} err="failed to get container status \"7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835\": rpc error: code = NotFound desc = could not find container \"7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835\": container with ID starting with 7d4bdd833fe0a5d1f53f6180b6fa3fac93c3f2cc4baca81b54aa9ad36e7f1835 not found: ID does not exist" Nov 24 18:24:01 crc kubenswrapper[4768]: I1124 18:24:01.911456 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" path="/var/lib/kubelet/pods/cfd40897-142e-42d7-9c3f-7e2e4f4c304c/volumes" Nov 24 18:24:08 crc kubenswrapper[4768]: I1124 18:24:08.912681 4768 generic.go:334] "Generic (PLEG): container finished" podID="03a4429e-4032-4d71-adc7-7257ac152323" containerID="3323293d2bb2d29c5dce33e1f5eec9f7fd3c4138dfa642212aa94a1e8caf5c43" exitCode=2 Nov 24 18:24:08 crc kubenswrapper[4768]: 
I1124 18:24:08.912766 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" event={"ID":"03a4429e-4032-4d71-adc7-7257ac152323","Type":"ContainerDied","Data":"3323293d2bb2d29c5dce33e1f5eec9f7fd3c4138dfa642212aa94a1e8caf5c43"} Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.320417 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.443216 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ssh-key\") pod \"03a4429e-4032-4d71-adc7-7257ac152323\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.443306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlqjl\" (UniqueName: \"kubernetes.io/projected/03a4429e-4032-4d71-adc7-7257ac152323-kube-api-access-rlqjl\") pod \"03a4429e-4032-4d71-adc7-7257ac152323\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.443653 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ceph\") pod \"03a4429e-4032-4d71-adc7-7257ac152323\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.443799 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-bootstrap-combined-ca-bundle\") pod \"03a4429e-4032-4d71-adc7-7257ac152323\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.443881 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-inventory\") pod \"03a4429e-4032-4d71-adc7-7257ac152323\" (UID: \"03a4429e-4032-4d71-adc7-7257ac152323\") " Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.451482 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a4429e-4032-4d71-adc7-7257ac152323-kube-api-access-rlqjl" (OuterVolumeSpecName: "kube-api-access-rlqjl") pod "03a4429e-4032-4d71-adc7-7257ac152323" (UID: "03a4429e-4032-4d71-adc7-7257ac152323"). InnerVolumeSpecName "kube-api-access-rlqjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.452053 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ceph" (OuterVolumeSpecName: "ceph") pod "03a4429e-4032-4d71-adc7-7257ac152323" (UID: "03a4429e-4032-4d71-adc7-7257ac152323"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.452171 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03a4429e-4032-4d71-adc7-7257ac152323" (UID: "03a4429e-4032-4d71-adc7-7257ac152323"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.471479 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "03a4429e-4032-4d71-adc7-7257ac152323" (UID: "03a4429e-4032-4d71-adc7-7257ac152323"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.479531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-inventory" (OuterVolumeSpecName: "inventory") pod "03a4429e-4032-4d71-adc7-7257ac152323" (UID: "03a4429e-4032-4d71-adc7-7257ac152323"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.546410 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.546450 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.546462 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlqjl\" (UniqueName: \"kubernetes.io/projected/03a4429e-4032-4d71-adc7-7257ac152323-kube-api-access-rlqjl\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.546475 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.546503 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a4429e-4032-4d71-adc7-7257ac152323-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.931710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" event={"ID":"03a4429e-4032-4d71-adc7-7257ac152323","Type":"ContainerDied","Data":"7cddb0d4860a559bb2334c1a504dd2381e491b0e857cc7f048d69927acdff614"} Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.932087 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cddb0d4860a559bb2334c1a504dd2381e491b0e857cc7f048d69927acdff614" Nov 24 18:24:10 crc kubenswrapper[4768]: I1124 18:24:10.931727 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.030348 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2"] Nov 24 18:24:18 crc kubenswrapper[4768]: E1124 18:24:18.031330 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="extract-content" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.031343 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="extract-content" Nov 24 18:24:18 crc kubenswrapper[4768]: E1124 18:24:18.031358 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a4429e-4032-4d71-adc7-7257ac152323" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.031368 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a4429e-4032-4d71-adc7-7257ac152323" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:24:18 crc kubenswrapper[4768]: E1124 18:24:18.031392 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="extract-utilities" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.031398 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="extract-utilities" Nov 24 18:24:18 crc kubenswrapper[4768]: E1124 18:24:18.031412 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="registry-server" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.031418 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="registry-server" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.031678 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a4429e-4032-4d71-adc7-7257ac152323" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.031709 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd40897-142e-42d7-9c3f-7e2e4f4c304c" containerName="registry-server" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.032341 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.034378 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.034599 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.034755 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.034868 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.036504 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.044098 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2"] Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.086595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.086701 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.086739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.086832 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hrth\" (UniqueName: \"kubernetes.io/projected/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-kube-api-access-8hrth\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.086874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.188143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.188207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.188233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.188288 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrth\" (UniqueName: \"kubernetes.io/projected/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-kube-api-access-8hrth\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.188313 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.194898 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.194981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.195466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.196651 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.212900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrth\" (UniqueName: \"kubernetes.io/projected/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-kube-api-access-8hrth\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.351604 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.859740 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2"] Nov 24 18:24:18 crc kubenswrapper[4768]: I1124 18:24:18.995111 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" event={"ID":"0938fce9-58c6-4933-aeb3-49e2fe28bf0f","Type":"ContainerStarted","Data":"287be006f1865de677e73004ff6c8b4e09a12444101c17f6be5b2749a9c0af68"} Nov 24 18:24:21 crc kubenswrapper[4768]: I1124 18:24:21.177796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" event={"ID":"0938fce9-58c6-4933-aeb3-49e2fe28bf0f","Type":"ContainerStarted","Data":"ed1e1626ee321a6a5082919b95fe49dbf59b4ed5ec820b61e51e4af18b7d28cc"} Nov 24 18:24:21 crc kubenswrapper[4768]: I1124 18:24:21.202114 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" podStartSLOduration=2.6985634210000002 podStartE2EDuration="3.202091036s" podCreationTimestamp="2025-11-24 18:24:18 +0000 UTC" firstStartedPulling="2025-11-24 18:24:18.863414645 +0000 UTC m=+2097.723996422" lastFinishedPulling="2025-11-24 18:24:19.36694226 +0000 UTC m=+2098.227524037" observedRunningTime="2025-11-24 18:24:21.19965281 +0000 UTC m=+2100.060234587" watchObservedRunningTime="2025-11-24 18:24:21.202091036 +0000 UTC m=+2100.062672813" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.496673 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r6v6v"] Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.499210 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.504929 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r6v6v"] Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.656268 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdznm\" (UniqueName: \"kubernetes.io/projected/91465827-d86b-4aa9-8ee8-619e80cef039-kube-api-access-mdznm\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.656327 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-catalog-content\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.656377 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-utilities\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.758617 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdznm\" (UniqueName: \"kubernetes.io/projected/91465827-d86b-4aa9-8ee8-619e80cef039-kube-api-access-mdznm\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.758694 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-catalog-content\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.758758 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-utilities\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.759379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-utilities\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.759376 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-catalog-content\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.778252 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mdznm\" (UniqueName: \"kubernetes.io/projected/91465827-d86b-4aa9-8ee8-619e80cef039-kube-api-access-mdznm\") pod \"redhat-operators-r6v6v\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") " pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:42 crc kubenswrapper[4768]: I1124 18:25:42.820526 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:43 crc kubenswrapper[4768]: I1124 18:25:43.315018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r6v6v"] Nov 24 18:25:43 crc kubenswrapper[4768]: I1124 18:25:43.956173 4768 generic.go:334] "Generic (PLEG): container finished" podID="91465827-d86b-4aa9-8ee8-619e80cef039" containerID="33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab" exitCode=0 Nov 24 18:25:43 crc kubenswrapper[4768]: I1124 18:25:43.956400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerDied","Data":"33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab"} Nov 24 18:25:43 crc kubenswrapper[4768]: I1124 18:25:43.956582 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerStarted","Data":"53ae4a3e203046901db451a40096bd3d4a9ece673a0305694a12c5b37d960925"} Nov 24 18:25:44 crc kubenswrapper[4768]: I1124 18:25:44.968020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerStarted","Data":"25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393"} Nov 24 18:25:46 crc kubenswrapper[4768]: I1124 18:25:46.990597 4768 generic.go:334] "Generic (PLEG): container finished" podID="91465827-d86b-4aa9-8ee8-619e80cef039" containerID="25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393" exitCode=0 Nov 24 18:25:46 crc kubenswrapper[4768]: I1124 18:25:46.990654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerDied","Data":"25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393"} Nov 24 18:25:49 crc kubenswrapper[4768]: I1124 18:25:49.019413 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerStarted","Data":"01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9"} Nov 24 18:25:49 crc kubenswrapper[4768]: I1124 18:25:49.045806 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r6v6v" podStartSLOduration=2.860695641 podStartE2EDuration="7.045785834s" podCreationTimestamp="2025-11-24 18:25:42 +0000 UTC" firstStartedPulling="2025-11-24 18:25:43.958072946 +0000 UTC m=+2182.818654723" lastFinishedPulling="2025-11-24 18:25:48.143163149 +0000 UTC m=+2187.003744916" observedRunningTime="2025-11-24 18:25:49.039678027 +0000 UTC m=+2187.900259804" watchObservedRunningTime="2025-11-24 18:25:49.045785834 +0000 UTC m=+2187.906367611" Nov 24 18:25:52 crc kubenswrapper[4768]: I1124 18:25:52.821025 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 
18:25:52 crc kubenswrapper[4768]: I1124 18:25:52.821622 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r6v6v" Nov 24 18:25:53 crc kubenswrapper[4768]: I1124 18:25:53.059570 4768 generic.go:334] "Generic (PLEG): container finished" podID="0938fce9-58c6-4933-aeb3-49e2fe28bf0f" containerID="ed1e1626ee321a6a5082919b95fe49dbf59b4ed5ec820b61e51e4af18b7d28cc" exitCode=2 Nov 24 18:25:53 crc kubenswrapper[4768]: I1124 18:25:53.059614 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" event={"ID":"0938fce9-58c6-4933-aeb3-49e2fe28bf0f","Type":"ContainerDied","Data":"ed1e1626ee321a6a5082919b95fe49dbf59b4ed5ec820b61e51e4af18b7d28cc"} Nov 24 18:25:53 crc kubenswrapper[4768]: I1124 18:25:53.891249 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r6v6v" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="registry-server" probeResult="failure" output=< Nov 24 18:25:53 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 18:25:53 crc kubenswrapper[4768]: > Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.572063 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.624453 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hrth\" (UniqueName: \"kubernetes.io/projected/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-kube-api-access-8hrth\") pod \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.624984 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ssh-key\") pod \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.625220 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ceph\") pod \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.625499 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-inventory\") pod \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.625657 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-bootstrap-combined-ca-bundle\") pod \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\" (UID: \"0938fce9-58c6-4933-aeb3-49e2fe28bf0f\") " Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.632437 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ceph" (OuterVolumeSpecName: "ceph") pod "0938fce9-58c6-4933-aeb3-49e2fe28bf0f" (UID: "0938fce9-58c6-4933-aeb3-49e2fe28bf0f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.633090 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0938fce9-58c6-4933-aeb3-49e2fe28bf0f" (UID: "0938fce9-58c6-4933-aeb3-49e2fe28bf0f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.633823 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-kube-api-access-8hrth" (OuterVolumeSpecName: "kube-api-access-8hrth") pod "0938fce9-58c6-4933-aeb3-49e2fe28bf0f" (UID: "0938fce9-58c6-4933-aeb3-49e2fe28bf0f"). InnerVolumeSpecName "kube-api-access-8hrth". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.661548 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-inventory" (OuterVolumeSpecName: "inventory") pod "0938fce9-58c6-4933-aeb3-49e2fe28bf0f" (UID: "0938fce9-58c6-4933-aeb3-49e2fe28bf0f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.667960 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0938fce9-58c6-4933-aeb3-49e2fe28bf0f" (UID: "0938fce9-58c6-4933-aeb3-49e2fe28bf0f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.728000 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.728037 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.728054 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hrth\" (UniqueName: \"kubernetes.io/projected/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-kube-api-access-8hrth\") on node \"crc\" DevicePath \"\"" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.728066 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:25:54 crc kubenswrapper[4768]: I1124 18:25:54.728079 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0938fce9-58c6-4933-aeb3-49e2fe28bf0f-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:25:55 crc kubenswrapper[4768]: I1124 18:25:55.086092 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2" event={"ID":"0938fce9-58c6-4933-aeb3-49e2fe28bf0f","Type":"ContainerDied","Data":"287be006f1865de677e73004ff6c8b4e09a12444101c17f6be5b2749a9c0af68"} Nov 24 18:25:55 crc kubenswrapper[4768]: I1124 18:25:55.086590 
Nov 24 18:25:55 crc kubenswrapper[4768]: I1124 18:25:55.086221 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2"
Nov 24 18:26:02 crc kubenswrapper[4768]: I1124 18:26:02.913081 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r6v6v"
Nov 24 18:26:02 crc kubenswrapper[4768]: I1124 18:26:02.988096 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r6v6v"
Nov 24 18:26:03 crc kubenswrapper[4768]: I1124 18:26:03.360718 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r6v6v"]
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.178078 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r6v6v" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="registry-server" containerID="cri-o://01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9" gracePeriod=2
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.658627 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6v6v"
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.825604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-utilities\") pod \"91465827-d86b-4aa9-8ee8-619e80cef039\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") "
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.825659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdznm\" (UniqueName: \"kubernetes.io/projected/91465827-d86b-4aa9-8ee8-619e80cef039-kube-api-access-mdznm\") pod \"91465827-d86b-4aa9-8ee8-619e80cef039\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") "
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.825699 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-catalog-content\") pod \"91465827-d86b-4aa9-8ee8-619e80cef039\" (UID: \"91465827-d86b-4aa9-8ee8-619e80cef039\") "
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.828399 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-utilities" (OuterVolumeSpecName: "utilities") pod "91465827-d86b-4aa9-8ee8-619e80cef039" (UID: "91465827-d86b-4aa9-8ee8-619e80cef039"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.835425 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91465827-d86b-4aa9-8ee8-619e80cef039-kube-api-access-mdznm" (OuterVolumeSpecName: "kube-api-access-mdznm") pod "91465827-d86b-4aa9-8ee8-619e80cef039" (UID: "91465827-d86b-4aa9-8ee8-619e80cef039"). InnerVolumeSpecName "kube-api-access-mdznm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.927688 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.927726 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdznm\" (UniqueName: \"kubernetes.io/projected/91465827-d86b-4aa9-8ee8-619e80cef039-kube-api-access-mdznm\") on node \"crc\" DevicePath \"\""
Nov 24 18:26:04 crc kubenswrapper[4768]: I1124 18:26:04.965119 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91465827-d86b-4aa9-8ee8-619e80cef039" (UID: "91465827-d86b-4aa9-8ee8-619e80cef039"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.030075 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91465827-d86b-4aa9-8ee8-619e80cef039-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.188113 4768 generic.go:334] "Generic (PLEG): container finished" podID="91465827-d86b-4aa9-8ee8-619e80cef039" containerID="01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9" exitCode=0
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.188213 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6v6v"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.188247 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerDied","Data":"01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9"}
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.189005 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6v6v" event={"ID":"91465827-d86b-4aa9-8ee8-619e80cef039","Type":"ContainerDied","Data":"53ae4a3e203046901db451a40096bd3d4a9ece673a0305694a12c5b37d960925"}
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.189044 4768 scope.go:117] "RemoveContainer" containerID="01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.212496 4768 scope.go:117] "RemoveContainer" containerID="25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.240256 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r6v6v"]
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.247675 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r6v6v"]
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.256370 4768 scope.go:117] "RemoveContainer" containerID="33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.295175 4768 scope.go:117] "RemoveContainer" containerID="01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9"
Nov 24 18:26:05 crc kubenswrapper[4768]: E1124 18:26:05.295637 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9\": container with ID starting with 01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9 not found: ID does not exist" containerID="01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.295676 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9"} err="failed to get container status \"01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9\": rpc error: code = NotFound desc = could not find container \"01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9\": container with ID starting with 01acb14a407f71a0ff82ca2c5b9349c80d326988aa0dd61825f28b3260bb56a9 not found: ID does not exist"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.295703 4768 scope.go:117] "RemoveContainer" containerID="25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393"
Nov 24 18:26:05 crc kubenswrapper[4768]: E1124 18:26:05.296218 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393\": container with ID starting with 25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393 not found: ID does not exist" containerID="25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.296277 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393"} err="failed to get container status \"25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393\": rpc error: code = NotFound desc = could not find container \"25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393\": container with ID starting with 25056017027ebc75c2b384c8f5d0c7d1e4f6345fc61d92f538285c70fa43e393 not found: ID does not exist"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.296309 4768 scope.go:117] "RemoveContainer" containerID="33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab"
Nov 24 18:26:05 crc kubenswrapper[4768]: E1124 18:26:05.296755 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab\": container with ID starting with 33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab not found: ID does not exist" containerID="33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.296796 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab"} err="failed to get container status \"33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab\": rpc error: code = NotFound desc = could not find container \"33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab\": container with ID starting with 33214a12217287b17da0e87edb8a96069851c7d6d6acebea4be8e0d4914509ab not found: ID does not exist"
Nov 24 18:26:05 crc kubenswrapper[4768]: I1124 18:26:05.911329 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" path="/var/lib/kubelet/pods/91465827-d86b-4aa9-8ee8-619e80cef039/volumes"
volumes dir" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" path="/var/lib/kubelet/pods/91465827-d86b-4aa9-8ee8-619e80cef039/volumes" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.042239 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d"] Nov 24 18:26:12 crc kubenswrapper[4768]: E1124 18:26:12.043149 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="extract-content" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.043161 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="extract-content" Nov 24 18:26:12 crc kubenswrapper[4768]: E1124 18:26:12.043176 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="extract-utilities" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.043185 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="extract-utilities" Nov 24 18:26:12 crc kubenswrapper[4768]: E1124 18:26:12.043199 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="registry-server" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.043206 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="registry-server" Nov 24 18:26:12 crc kubenswrapper[4768]: E1124 18:26:12.043225 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0938fce9-58c6-4933-aeb3-49e2fe28bf0f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.043231 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0938fce9-58c6-4933-aeb3-49e2fe28bf0f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.043393 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="91465827-d86b-4aa9-8ee8-619e80cef039" containerName="registry-server" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.043418 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0938fce9-58c6-4933-aeb3-49e2fe28bf0f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.044040 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.048946 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.049165 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.049830 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.049968 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.050413 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.053884 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d"] Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.070668 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.070704 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.070743 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hxsp\" (UniqueName: \"kubernetes.io/projected/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-kube-api-access-2hxsp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.070859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.070894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.171760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.171809 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.171841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hxsp\" (UniqueName: \"kubernetes.io/projected/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-kube-api-access-2hxsp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.171947 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.171983 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.177339 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.177529 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.178301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.181202 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.189940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hxsp\" (UniqueName: \"kubernetes.io/projected/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-kube-api-access-2hxsp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.375257 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:26:12 crc kubenswrapper[4768]: I1124 18:26:12.936456 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d"] Nov 24 18:26:13 crc kubenswrapper[4768]: I1124 18:26:13.263349 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" event={"ID":"ca59c4d5-5455-49a2-885e-d6e8eb3103fd","Type":"ContainerStarted","Data":"8bb15367d024a92a07284b0bd0d14e066d676335bbd8af6ca56ef7debff669ab"} Nov 24 18:26:13 crc kubenswrapper[4768]: I1124 18:26:13.655991 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:26:13 crc kubenswrapper[4768]: I1124 18:26:13.656062 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:26:14 crc kubenswrapper[4768]: I1124 18:26:14.271363 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" event={"ID":"ca59c4d5-5455-49a2-885e-d6e8eb3103fd","Type":"ContainerStarted","Data":"be599b70b9925efc3c98f922392e1439019947a6c12cf5c1d79e8379e097153c"} Nov 24 18:26:14 crc kubenswrapper[4768]: I1124 18:26:14.292132 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" podStartSLOduration=1.635137044 podStartE2EDuration="2.292113232s" podCreationTimestamp="2025-11-24 18:26:12 +0000 UTC" firstStartedPulling="2025-11-24 18:26:12.94146209 +0000 UTC m=+2211.802043867" lastFinishedPulling="2025-11-24 18:26:13.598438278 +0000 UTC m=+2212.459020055" observedRunningTime="2025-11-24 18:26:14.289413228 +0000 UTC m=+2213.149995005" watchObservedRunningTime="2025-11-24 18:26:14.292113232 +0000 UTC m=+2213.152695009" Nov 24 18:26:43 crc kubenswrapper[4768]: I1124 18:26:43.657566 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:26:43 crc kubenswrapper[4768]: I1124 18:26:43.658637 4768 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:27:09 crc kubenswrapper[4768]: I1124 18:27:09.818037 4768 generic.go:334] "Generic (PLEG): container finished" podID="ca59c4d5-5455-49a2-885e-d6e8eb3103fd" containerID="be599b70b9925efc3c98f922392e1439019947a6c12cf5c1d79e8379e097153c" exitCode=2 Nov 24 18:27:09 crc kubenswrapper[4768]: I1124 18:27:09.818162 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" event={"ID":"ca59c4d5-5455-49a2-885e-d6e8eb3103fd","Type":"ContainerDied","Data":"be599b70b9925efc3c98f922392e1439019947a6c12cf5c1d79e8379e097153c"} Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.347990 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.429817 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-inventory\") pod \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.429908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ssh-key\") pod \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.430000 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hxsp\" (UniqueName: \"kubernetes.io/projected/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-kube-api-access-2hxsp\") pod \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.430056 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ceph\") pod \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.430121 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-bootstrap-combined-ca-bundle\") pod \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\" (UID: \"ca59c4d5-5455-49a2-885e-d6e8eb3103fd\") " Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.436201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ceph" (OuterVolumeSpecName: "ceph") pod "ca59c4d5-5455-49a2-885e-d6e8eb3103fd" (UID: "ca59c4d5-5455-49a2-885e-d6e8eb3103fd"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.436218 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-kube-api-access-2hxsp" (OuterVolumeSpecName: "kube-api-access-2hxsp") pod "ca59c4d5-5455-49a2-885e-d6e8eb3103fd" (UID: "ca59c4d5-5455-49a2-885e-d6e8eb3103fd"). InnerVolumeSpecName "kube-api-access-2hxsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.437956 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ca59c4d5-5455-49a2-885e-d6e8eb3103fd" (UID: "ca59c4d5-5455-49a2-885e-d6e8eb3103fd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.456382 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ca59c4d5-5455-49a2-885e-d6e8eb3103fd" (UID: "ca59c4d5-5455-49a2-885e-d6e8eb3103fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.457808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-inventory" (OuterVolumeSpecName: "inventory") pod "ca59c4d5-5455-49a2-885e-d6e8eb3103fd" (UID: "ca59c4d5-5455-49a2-885e-d6e8eb3103fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.532333 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.532371 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.532381 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.532390 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hxsp\" (UniqueName: \"kubernetes.io/projected/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-kube-api-access-2hxsp\") on node \"crc\" DevicePath \"\"" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.532398 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca59c4d5-5455-49a2-885e-d6e8eb3103fd-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.839015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" event={"ID":"ca59c4d5-5455-49a2-885e-d6e8eb3103fd","Type":"ContainerDied","Data":"8bb15367d024a92a07284b0bd0d14e066d676335bbd8af6ca56ef7debff669ab"} Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.839403 
4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb15367d024a92a07284b0bd0d14e066d676335bbd8af6ca56ef7debff669ab" Nov 24 18:27:11 crc kubenswrapper[4768]: I1124 18:27:11.839087 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d" Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.657065 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.657165 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.657264 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.658453 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.658634 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" gracePeriod=600 Nov 24 18:27:13 crc kubenswrapper[4768]: E1124 18:27:13.787646 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.862289 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" exitCode=0 Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.862358 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"} Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.862409 4768 scope.go:117] "RemoveContainer" containerID="f1549e11399ec1dbcd19fbf82542cd22671bbfb667e034644ac3a36883dd42c3" Nov 24 18:27:13 crc kubenswrapper[4768]: I1124 18:27:13.863826 4768 scope.go:117] "RemoveContainer" 
containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:27:13 crc kubenswrapper[4768]: E1124 18:27:13.865288 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:27:25 crc kubenswrapper[4768]: I1124 18:27:25.898743 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:27:25 crc kubenswrapper[4768]: E1124 18:27:25.899365 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:27:36 crc kubenswrapper[4768]: I1124 18:27:36.899076 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:27:36 crc kubenswrapper[4768]: E1124 18:27:36.900162 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:27:48 crc kubenswrapper[4768]: I1124 18:27:48.898076 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:27:48 crc kubenswrapper[4768]: E1124 18:27:48.898868 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.031812 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b"] Nov 24 18:27:49 crc kubenswrapper[4768]: E1124 18:27:49.032307 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca59c4d5-5455-49a2-885e-d6e8eb3103fd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.032333 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca59c4d5-5455-49a2-885e-d6e8eb3103fd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.032566 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca59c4d5-5455-49a2-885e-d6e8eb3103fd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.033294 4768 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.038349 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.038613 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.038768 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.038819 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.039467 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.048372 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b"] Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.154908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.154963 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.155026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.155092 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.155314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vsm\" (UniqueName: \"kubernetes.io/projected/0d74256a-a4fc-4ecf-a57c-09aa5686878b-kube-api-access-b5vsm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.257373 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.257466 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.257659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5vsm\" (UniqueName: \"kubernetes.io/projected/0d74256a-a4fc-4ecf-a57c-09aa5686878b-kube-api-access-b5vsm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.257789 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.257829 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.266170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.266228 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.266651 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.267098 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.274928 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5vsm\" (UniqueName: \"kubernetes.io/projected/0d74256a-a4fc-4ecf-a57c-09aa5686878b-kube-api-access-b5vsm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.370206 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:27:49 crc kubenswrapper[4768]: I1124 18:27:49.971696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b"] Nov 24 18:27:50 crc kubenswrapper[4768]: I1124 18:27:50.325576 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" event={"ID":"0d74256a-a4fc-4ecf-a57c-09aa5686878b","Type":"ContainerStarted","Data":"7bcac145c96b09260d911bd013da52c3e3f3b42bef80dfa534f30bac4c552ea3"} Nov 24 18:27:51 crc kubenswrapper[4768]: I1124 18:27:51.341467 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" event={"ID":"0d74256a-a4fc-4ecf-a57c-09aa5686878b","Type":"ContainerStarted","Data":"3056e5fdd20a04d2892e05b1d618ebebe0bb125cb35434cfc2ef87d6a9c2557c"} Nov 24 18:27:51 crc kubenswrapper[4768]: I1124 18:27:51.377139 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" podStartSLOduration=1.8087223799999999 podStartE2EDuration="2.377113387s" podCreationTimestamp="2025-11-24 18:27:49 +0000 UTC" firstStartedPulling="2025-11-24 18:27:49.984647584 +0000 UTC m=+2308.845229371" lastFinishedPulling="2025-11-24 18:27:50.553038561 +0000 UTC m=+2309.413620378" observedRunningTime="2025-11-24 18:27:51.365391397 +0000 UTC m=+2310.225973194" watchObservedRunningTime="2025-11-24 18:27:51.377113387 +0000 UTC m=+2310.237695194" Nov 24 18:28:01 crc kubenswrapper[4768]: I1124 18:28:01.909755 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:28:01 crc kubenswrapper[4768]: E1124 18:28:01.910802 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:28:14 crc kubenswrapper[4768]: I1124 18:28:14.898471 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:28:14 crc kubenswrapper[4768]: E1124 18:28:14.899420 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:28:26 crc kubenswrapper[4768]: I1124 18:28:26.898423 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:28:26 crc kubenswrapper[4768]: E1124 18:28:26.899301 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:28:41 crc kubenswrapper[4768]: I1124 18:28:41.905293 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:28:41 crc kubenswrapper[4768]: E1124 18:28:41.906029 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:28:53 crc kubenswrapper[4768]: I1124 18:28:53.899072 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:28:53 crc kubenswrapper[4768]: E1124 18:28:53.900190 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:29:05 crc kubenswrapper[4768]: I1124 18:29:05.898803 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:29:05 crc kubenswrapper[4768]: E1124 18:29:05.899737 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:29:20 crc kubenswrapper[4768]: I1124 18:29:20.899255 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:29:20 crc kubenswrapper[4768]: E1124 18:29:20.900341 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" 
podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:29:26 crc kubenswrapper[4768]: I1124 18:29:26.259197 4768 generic.go:334] "Generic (PLEG): container finished" podID="0d74256a-a4fc-4ecf-a57c-09aa5686878b" containerID="3056e5fdd20a04d2892e05b1d618ebebe0bb125cb35434cfc2ef87d6a9c2557c" exitCode=0 Nov 24 18:29:26 crc kubenswrapper[4768]: I1124 18:29:26.259789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" event={"ID":"0d74256a-a4fc-4ecf-a57c-09aa5686878b","Type":"ContainerDied","Data":"3056e5fdd20a04d2892e05b1d618ebebe0bb125cb35434cfc2ef87d6a9c2557c"} Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.731159 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.921724 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-bootstrap-combined-ca-bundle\") pod \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.921764 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5vsm\" (UniqueName: \"kubernetes.io/projected/0d74256a-a4fc-4ecf-a57c-09aa5686878b-kube-api-access-b5vsm\") pod \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.921903 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ssh-key\") pod \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.922008 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ceph\") pod \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.922142 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-inventory\") pod \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\" (UID: \"0d74256a-a4fc-4ecf-a57c-09aa5686878b\") " Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.927444 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ceph" (OuterVolumeSpecName: "ceph") pod "0d74256a-a4fc-4ecf-a57c-09aa5686878b" (UID: "0d74256a-a4fc-4ecf-a57c-09aa5686878b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.927894 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0d74256a-a4fc-4ecf-a57c-09aa5686878b" (UID: "0d74256a-a4fc-4ecf-a57c-09aa5686878b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.928622 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d74256a-a4fc-4ecf-a57c-09aa5686878b-kube-api-access-b5vsm" (OuterVolumeSpecName: "kube-api-access-b5vsm") pod "0d74256a-a4fc-4ecf-a57c-09aa5686878b" (UID: "0d74256a-a4fc-4ecf-a57c-09aa5686878b"). InnerVolumeSpecName "kube-api-access-b5vsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.954001 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d74256a-a4fc-4ecf-a57c-09aa5686878b" (UID: "0d74256a-a4fc-4ecf-a57c-09aa5686878b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:27 crc kubenswrapper[4768]: I1124 18:29:27.966917 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-inventory" (OuterVolumeSpecName: "inventory") pod "0d74256a-a4fc-4ecf-a57c-09aa5686878b" (UID: "0d74256a-a4fc-4ecf-a57c-09aa5686878b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.024918 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.024971 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.024988 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.025004 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d74256a-a4fc-4ecf-a57c-09aa5686878b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.025027 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5vsm\" (UniqueName: \"kubernetes.io/projected/0d74256a-a4fc-4ecf-a57c-09aa5686878b-kube-api-access-b5vsm\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.279831 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" event={"ID":"0d74256a-a4fc-4ecf-a57c-09aa5686878b","Type":"ContainerDied","Data":"7bcac145c96b09260d911bd013da52c3e3f3b42bef80dfa534f30bac4c552ea3"} Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.279891 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcac145c96b09260d911bd013da52c3e3f3b42bef80dfa534f30bac4c552ea3" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.279925 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.388166 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7"] Nov 24 18:29:28 crc kubenswrapper[4768]: E1124 18:29:28.389125 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d74256a-a4fc-4ecf-a57c-09aa5686878b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.389199 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d74256a-a4fc-4ecf-a57c-09aa5686878b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.389455 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d74256a-a4fc-4ecf-a57c-09aa5686878b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.390113 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.392711 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.392902 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.392934 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.392981 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.393057 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.403756 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7"] Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.533373 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.533619 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.533679 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.533836 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ckqs\" (UniqueName: \"kubernetes.io/projected/1dd3638b-dad5-4d28-8451-1ef9cbe46251-kube-api-access-7ckqs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.635812 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.635900 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.635957 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ckqs\" (UniqueName: \"kubernetes.io/projected/1dd3638b-dad5-4d28-8451-1ef9cbe46251-kube-api-access-7ckqs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.636010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.639854 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.640389 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.640844 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.657771 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ckqs\" (UniqueName: \"kubernetes.io/projected/1dd3638b-dad5-4d28-8451-1ef9cbe46251-kube-api-access-7ckqs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:28 crc kubenswrapper[4768]: I1124 18:29:28.720418 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:29 crc kubenswrapper[4768]: I1124 18:29:29.256660 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7"] Nov 24 18:29:29 crc kubenswrapper[4768]: I1124 18:29:29.269051 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 18:29:29 crc kubenswrapper[4768]: I1124 18:29:29.289760 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" event={"ID":"1dd3638b-dad5-4d28-8451-1ef9cbe46251","Type":"ContainerStarted","Data":"9c2f994c9a62eb4ad3ead65b79f38aba51ecbadef503648f6b7034382d27b381"} Nov 24 18:29:30 crc kubenswrapper[4768]: I1124 18:29:30.300287 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" event={"ID":"1dd3638b-dad5-4d28-8451-1ef9cbe46251","Type":"ContainerStarted","Data":"0c6d928af888a27318385ab49c07dd861f20f2c68081c60fd82f7c8450787844"} Nov 24 18:29:30 crc kubenswrapper[4768]: I1124 18:29:30.331207 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" podStartSLOduration=1.74935551 podStartE2EDuration="2.331183564s" podCreationTimestamp="2025-11-24 18:29:28 +0000 UTC" firstStartedPulling="2025-11-24 18:29:29.268867206 +0000 UTC m=+2408.129448983" lastFinishedPulling="2025-11-24 18:29:29.85069526 +0000 UTC m=+2408.711277037" observedRunningTime="2025-11-24 18:29:30.320567454 +0000 UTC m=+2409.181149231" watchObservedRunningTime="2025-11-24 18:29:30.331183564 +0000 UTC m=+2409.191765341" Nov 24 18:29:31 crc kubenswrapper[4768]: I1124 18:29:31.905156 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:29:31 crc kubenswrapper[4768]: E1124 18:29:31.905886 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:29:42 crc kubenswrapper[4768]: I1124 18:29:42.899161 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:29:42 crc kubenswrapper[4768]: E1124 18:29:42.899906 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:29:55 crc kubenswrapper[4768]: I1124 18:29:55.543511 4768 generic.go:334] "Generic (PLEG): container finished" podID="1dd3638b-dad5-4d28-8451-1ef9cbe46251" containerID="0c6d928af888a27318385ab49c07dd861f20f2c68081c60fd82f7c8450787844" exitCode=0 Nov 24 18:29:55 crc kubenswrapper[4768]: I1124 18:29:55.543559 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" event={"ID":"1dd3638b-dad5-4d28-8451-1ef9cbe46251","Type":"ContainerDied","Data":"0c6d928af888a27318385ab49c07dd861f20f2c68081c60fd82f7c8450787844"} Nov 24 18:29:56 crc kubenswrapper[4768]: I1124 18:29:56.979701 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.088766 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ckqs\" (UniqueName: \"kubernetes.io/projected/1dd3638b-dad5-4d28-8451-1ef9cbe46251-kube-api-access-7ckqs\") pod \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.088919 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ssh-key\") pod \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.089019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ceph\") pod \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.089166 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-inventory\") pod \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\" (UID: \"1dd3638b-dad5-4d28-8451-1ef9cbe46251\") " Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.099428 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ceph" (OuterVolumeSpecName: "ceph") pod "1dd3638b-dad5-4d28-8451-1ef9cbe46251" (UID: "1dd3638b-dad5-4d28-8451-1ef9cbe46251"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.107718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd3638b-dad5-4d28-8451-1ef9cbe46251-kube-api-access-7ckqs" (OuterVolumeSpecName: "kube-api-access-7ckqs") pod "1dd3638b-dad5-4d28-8451-1ef9cbe46251" (UID: "1dd3638b-dad5-4d28-8451-1ef9cbe46251"). InnerVolumeSpecName "kube-api-access-7ckqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.128927 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-inventory" (OuterVolumeSpecName: "inventory") pod "1dd3638b-dad5-4d28-8451-1ef9cbe46251" (UID: "1dd3638b-dad5-4d28-8451-1ef9cbe46251"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.133331 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1dd3638b-dad5-4d28-8451-1ef9cbe46251" (UID: "1dd3638b-dad5-4d28-8451-1ef9cbe46251"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.191970 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.192048 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ckqs\" (UniqueName: \"kubernetes.io/projected/1dd3638b-dad5-4d28-8451-1ef9cbe46251-kube-api-access-7ckqs\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.192080 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.192105 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dd3638b-dad5-4d28-8451-1ef9cbe46251-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.569630 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" event={"ID":"1dd3638b-dad5-4d28-8451-1ef9cbe46251","Type":"ContainerDied","Data":"9c2f994c9a62eb4ad3ead65b79f38aba51ecbadef503648f6b7034382d27b381"} Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.570071 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2f994c9a62eb4ad3ead65b79f38aba51ecbadef503648f6b7034382d27b381" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.569710 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.733931 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"] Nov 24 18:29:57 crc kubenswrapper[4768]: E1124 18:29:57.734777 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd3638b-dad5-4d28-8451-1ef9cbe46251" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.734896 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd3638b-dad5-4d28-8451-1ef9cbe46251" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.735216 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd3638b-dad5-4d28-8451-1ef9cbe46251" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.736198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.738652 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.738753 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.738703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.738712 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.739206 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.743886 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"] Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.898731 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:29:57 crc kubenswrapper[4768]: E1124 18:29:57.899060 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.904222 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjbz\" (UniqueName: \"kubernetes.io/projected/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-kube-api-access-xbjbz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.904289 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.904347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:57 crc kubenswrapper[4768]: I1124 18:29:57.904507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.005810 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.005948 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.006003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjbz\" (UniqueName: \"kubernetes.io/projected/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-kube-api-access-xbjbz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.006077 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.010518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.010659 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"
Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.011889 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"
Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.026791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbjbz\" (UniqueName: \"kubernetes.io/projected/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-kube-api-access-xbjbz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-84jr8\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"
Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.056213 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"
Nov 24 18:29:58 crc kubenswrapper[4768]: I1124 18:29:58.586829 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"]
Nov 24 18:29:58 crc kubenswrapper[4768]: W1124 18:29:58.597272 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ca0ce9c_abe8_49c5_9aed_d63e4bae7811.slice/crio-88a31b7fe3e7d8d57f365fed369ec17e6acfef582cd7988a8c4daa87c309a0b0 WatchSource:0}: Error finding container 88a31b7fe3e7d8d57f365fed369ec17e6acfef582cd7988a8c4daa87c309a0b0: Status 404 returned error can't find the container with id 88a31b7fe3e7d8d57f365fed369ec17e6acfef582cd7988a8c4daa87c309a0b0
Nov 24 18:29:59 crc kubenswrapper[4768]: I1124 18:29:59.591241 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" event={"ID":"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811","Type":"ContainerStarted","Data":"4c5cd3bc882ec07f5ee2de5f09423ffca2d144641b2056fb349bfc0c5ab4d615"}
Nov 24 18:29:59 crc kubenswrapper[4768]: I1124 18:29:59.591688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" event={"ID":"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811","Type":"ContainerStarted","Data":"88a31b7fe3e7d8d57f365fed369ec17e6acfef582cd7988a8c4daa87c309a0b0"}
Nov 24 18:29:59 crc kubenswrapper[4768]: I1124 18:29:59.615090 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" podStartSLOduration=2.125280974 podStartE2EDuration="2.615064702s" podCreationTimestamp="2025-11-24 18:29:57 +0000 UTC" firstStartedPulling="2025-11-24 18:29:58.600798247 +0000 UTC m=+2437.461380024" lastFinishedPulling="2025-11-24 18:29:59.090581975 +0000 UTC m=+2437.951163752" observedRunningTime="2025-11-24 18:29:59.607654379 +0000 UTC m=+2438.468236156" watchObservedRunningTime="2025-11-24 18:29:59.615064702 +0000 UTC m=+2438.475646479"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.140375 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"]
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.142093 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.147761 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.153037 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.155583 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"]
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.249260 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/620048ac-f859-4df2-bc3a-7111daa9db60-secret-volume\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.249401 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4455r\" (UniqueName: \"kubernetes.io/projected/620048ac-f859-4df2-bc3a-7111daa9db60-kube-api-access-4455r\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.249478 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/620048ac-f859-4df2-bc3a-7111daa9db60-config-volume\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.351697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/620048ac-f859-4df2-bc3a-7111daa9db60-config-volume\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.352224 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/620048ac-f859-4df2-bc3a-7111daa9db60-secret-volume\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.352384 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4455r\" (UniqueName: \"kubernetes.io/projected/620048ac-f859-4df2-bc3a-7111daa9db60-kube-api-access-4455r\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.352970 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/620048ac-f859-4df2-bc3a-7111daa9db60-config-volume\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.359417 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/620048ac-f859-4df2-bc3a-7111daa9db60-secret-volume\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.371284 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4455r\" (UniqueName: \"kubernetes.io/projected/620048ac-f859-4df2-bc3a-7111daa9db60-kube-api-access-4455r\") pod \"collect-profiles-29400150-9h7qx\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.472666 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:00 crc kubenswrapper[4768]: I1124 18:30:00.955699 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"]
Nov 24 18:30:01 crc kubenswrapper[4768]: I1124 18:30:01.630088 4768 generic.go:334] "Generic (PLEG): container finished" podID="620048ac-f859-4df2-bc3a-7111daa9db60" containerID="feabe1b78a8870a375e4b46f8bf3caef46b8c42289258f643a1f42c66f7162ac" exitCode=0
Nov 24 18:30:01 crc kubenswrapper[4768]: I1124 18:30:01.630181 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx" event={"ID":"620048ac-f859-4df2-bc3a-7111daa9db60","Type":"ContainerDied","Data":"feabe1b78a8870a375e4b46f8bf3caef46b8c42289258f643a1f42c66f7162ac"}
Nov 24 18:30:01 crc kubenswrapper[4768]: I1124 18:30:01.630839 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx" event={"ID":"620048ac-f859-4df2-bc3a-7111daa9db60","Type":"ContainerStarted","Data":"cac3ca26f321eba86337aa491a4b836132c6c85bded64f0867d0a9dcdc40d766"}
Nov 24 18:30:02 crc kubenswrapper[4768]: I1124 18:30:02.972045 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.125526 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4455r\" (UniqueName: \"kubernetes.io/projected/620048ac-f859-4df2-bc3a-7111daa9db60-kube-api-access-4455r\") pod \"620048ac-f859-4df2-bc3a-7111daa9db60\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") "
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.125665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/620048ac-f859-4df2-bc3a-7111daa9db60-secret-volume\") pod \"620048ac-f859-4df2-bc3a-7111daa9db60\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") "
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.126964 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/620048ac-f859-4df2-bc3a-7111daa9db60-config-volume\") pod \"620048ac-f859-4df2-bc3a-7111daa9db60\" (UID: \"620048ac-f859-4df2-bc3a-7111daa9db60\") "
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.129065 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620048ac-f859-4df2-bc3a-7111daa9db60-config-volume" (OuterVolumeSpecName: "config-volume") pod "620048ac-f859-4df2-bc3a-7111daa9db60" (UID: "620048ac-f859-4df2-bc3a-7111daa9db60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.134279 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620048ac-f859-4df2-bc3a-7111daa9db60-kube-api-access-4455r" (OuterVolumeSpecName: "kube-api-access-4455r") pod "620048ac-f859-4df2-bc3a-7111daa9db60" (UID: "620048ac-f859-4df2-bc3a-7111daa9db60"). InnerVolumeSpecName "kube-api-access-4455r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.134475 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620048ac-f859-4df2-bc3a-7111daa9db60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "620048ac-f859-4df2-bc3a-7111daa9db60" (UID: "620048ac-f859-4df2-bc3a-7111daa9db60"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.231503 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/620048ac-f859-4df2-bc3a-7111daa9db60-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.231546 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4455r\" (UniqueName: \"kubernetes.io/projected/620048ac-f859-4df2-bc3a-7111daa9db60-kube-api-access-4455r\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.231563 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/620048ac-f859-4df2-bc3a-7111daa9db60-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.653150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx" event={"ID":"620048ac-f859-4df2-bc3a-7111daa9db60","Type":"ContainerDied","Data":"cac3ca26f321eba86337aa491a4b836132c6c85bded64f0867d0a9dcdc40d766"}
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.653755 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac3ca26f321eba86337aa491a4b836132c6c85bded64f0867d0a9dcdc40d766"
Nov 24 18:30:03 crc kubenswrapper[4768]: I1124 18:30:03.653190 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400150-9h7qx"
Nov 24 18:30:04 crc kubenswrapper[4768]: I1124 18:30:04.065818 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q"]
Nov 24 18:30:04 crc kubenswrapper[4768]: I1124 18:30:04.073255 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400105-t4h2q"]
Nov 24 18:30:04 crc kubenswrapper[4768]: I1124 18:30:04.669902 4768 generic.go:334] "Generic (PLEG): container finished" podID="0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" containerID="4c5cd3bc882ec07f5ee2de5f09423ffca2d144641b2056fb349bfc0c5ab4d615" exitCode=0
Nov 24 18:30:04 crc kubenswrapper[4768]: I1124 18:30:04.670001 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" event={"ID":"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811","Type":"ContainerDied","Data":"4c5cd3bc882ec07f5ee2de5f09423ffca2d144641b2056fb349bfc0c5ab4d615"}
Nov 24 18:30:05 crc kubenswrapper[4768]: I1124 18:30:05.916112 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b824dba7-d50a-4972-ba6f-49ee0fb30604" path="/var/lib/kubelet/pods/b824dba7-d50a-4972-ba6f-49ee0fb30604/volumes"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.103171 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.198167 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbjbz\" (UniqueName: \"kubernetes.io/projected/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-kube-api-access-xbjbz\") pod \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") "
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.198302 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-inventory\") pod \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") "
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.198653 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ssh-key\") pod \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") "
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.198792 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ceph\") pod \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\" (UID: \"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811\") "
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.206234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-kube-api-access-xbjbz" (OuterVolumeSpecName: "kube-api-access-xbjbz") pod "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" (UID: "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811"). InnerVolumeSpecName "kube-api-access-xbjbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.206370 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ceph" (OuterVolumeSpecName: "ceph") pod "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" (UID: "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.231393 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-inventory" (OuterVolumeSpecName: "inventory") pod "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" (UID: "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.231690 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" (UID: "0ca0ce9c-abe8-49c5-9aed-d63e4bae7811"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.301603 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.301646 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-ceph\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.301666 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbjbz\" (UniqueName: \"kubernetes.io/projected/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-kube-api-access-xbjbz\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.301680 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ca0ce9c-abe8-49c5-9aed-d63e4bae7811-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.689439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8" event={"ID":"0ca0ce9c-abe8-49c5-9aed-d63e4bae7811","Type":"ContainerDied","Data":"88a31b7fe3e7d8d57f365fed369ec17e6acfef582cd7988a8c4daa87c309a0b0"}
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.690039 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a31b7fe3e7d8d57f365fed369ec17e6acfef582cd7988a8c4daa87c309a0b0"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.689557 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-84jr8"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.804318 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"]
Nov 24 18:30:06 crc kubenswrapper[4768]: E1124 18:30:06.804829 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620048ac-f859-4df2-bc3a-7111daa9db60" containerName="collect-profiles"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.804850 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="620048ac-f859-4df2-bc3a-7111daa9db60" containerName="collect-profiles"
Nov 24 18:30:06 crc kubenswrapper[4768]: E1124 18:30:06.804886 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.804897 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.805119 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca0ce9c-abe8-49c5-9aed-d63e4bae7811" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.805132 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="620048ac-f859-4df2-bc3a-7111daa9db60" containerName="collect-profiles"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.806009 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.809891 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.814759 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.815083 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.815256 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.816084 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.824409 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"]
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.914204 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.914274 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mkb9\" (UniqueName: \"kubernetes.io/projected/afb6ccb1-e75a-470b-9755-a3359c7d23fd-kube-api-access-9mkb9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.914328 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:06 crc kubenswrapper[4768]: I1124 18:30:06.914541 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.018388 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.018463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mkb9\" (UniqueName: \"kubernetes.io/projected/afb6ccb1-e75a-470b-9755-a3359c7d23fd-kube-api-access-9mkb9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.018523 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.018640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.024292 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.024837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.025332 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.036557 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mkb9\" (UniqueName: \"kubernetes.io/projected/afb6ccb1-e75a-470b-9755-a3359c7d23fd-kube-api-access-9mkb9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8th9m\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.124703 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.679404 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"]
Nov 24 18:30:07 crc kubenswrapper[4768]: W1124 18:30:07.681634 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafb6ccb1_e75a_470b_9755_a3359c7d23fd.slice/crio-79653574cd6f5862a8ac96e2f9b4af0d2798567e2e645138e77451841532bbe2 WatchSource:0}: Error finding container 79653574cd6f5862a8ac96e2f9b4af0d2798567e2e645138e77451841532bbe2: Status 404 returned error can't find the container with id 79653574cd6f5862a8ac96e2f9b4af0d2798567e2e645138e77451841532bbe2
Nov 24 18:30:07 crc kubenswrapper[4768]: I1124 18:30:07.698137 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m" event={"ID":"afb6ccb1-e75a-470b-9755-a3359c7d23fd","Type":"ContainerStarted","Data":"79653574cd6f5862a8ac96e2f9b4af0d2798567e2e645138e77451841532bbe2"}
Nov 24 18:30:08 crc kubenswrapper[4768]: I1124 18:30:08.708474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m" event={"ID":"afb6ccb1-e75a-470b-9755-a3359c7d23fd","Type":"ContainerStarted","Data":"cdd229d0cabe467aaf3f755f92ddf55e2df8b3288e318e02ddf95784aa8736fc"}
Nov 24 18:30:08 crc kubenswrapper[4768]: I1124 18:30:08.740414 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m" podStartSLOduration=2.31667883 podStartE2EDuration="2.740385162s" podCreationTimestamp="2025-11-24 18:30:06 +0000 UTC" firstStartedPulling="2025-11-24 18:30:07.68451374 +0000 UTC m=+2446.545095517" lastFinishedPulling="2025-11-24 18:30:08.108220072 +0000 UTC m=+2446.968801849" observedRunningTime="2025-11-24 18:30:08.73225504 +0000 UTC m=+2447.592836857" watchObservedRunningTime="2025-11-24 18:30:08.740385162 +0000 UTC m=+2447.600966939"
Nov 24 18:30:12 crc kubenswrapper[4768]: I1124 18:30:12.899438 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"
Nov 24 18:30:12 crc kubenswrapper[4768]: E1124 18:30:12.900267 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:30:27 crc kubenswrapper[4768]: I1124 18:30:27.898860 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"
Nov 24 18:30:27 crc kubenswrapper[4768]: E1124 18:30:27.899903 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:30:38 crc kubenswrapper[4768]: I1124 18:30:38.903859 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"
Nov 24 18:30:38 crc kubenswrapper[4768]: E1124 18:30:38.905567 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:30:43 crc kubenswrapper[4768]: I1124 18:30:43.044196 4768 generic.go:334] "Generic (PLEG): container finished" podID="afb6ccb1-e75a-470b-9755-a3359c7d23fd" containerID="cdd229d0cabe467aaf3f755f92ddf55e2df8b3288e318e02ddf95784aa8736fc" exitCode=0
Nov 24 18:30:43 crc kubenswrapper[4768]: I1124 18:30:43.044288 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m" event={"ID":"afb6ccb1-e75a-470b-9755-a3359c7d23fd","Type":"ContainerDied","Data":"cdd229d0cabe467aaf3f755f92ddf55e2df8b3288e318e02ddf95784aa8736fc"}
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.464838 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.600311 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mkb9\" (UniqueName: \"kubernetes.io/projected/afb6ccb1-e75a-470b-9755-a3359c7d23fd-kube-api-access-9mkb9\") pod \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") "
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.600579 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-inventory\") pod \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") "
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.600666 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ceph\") pod \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") "
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.600795 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ssh-key\") pod \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\" (UID: \"afb6ccb1-e75a-470b-9755-a3359c7d23fd\") "
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.606465 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ceph" (OuterVolumeSpecName: "ceph") pod "afb6ccb1-e75a-470b-9755-a3359c7d23fd" (UID: "afb6ccb1-e75a-470b-9755-a3359c7d23fd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.606863 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb6ccb1-e75a-470b-9755-a3359c7d23fd-kube-api-access-9mkb9" (OuterVolumeSpecName: "kube-api-access-9mkb9") pod "afb6ccb1-e75a-470b-9755-a3359c7d23fd" (UID: "afb6ccb1-e75a-470b-9755-a3359c7d23fd"). InnerVolumeSpecName "kube-api-access-9mkb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.625939 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "afb6ccb1-e75a-470b-9755-a3359c7d23fd" (UID: "afb6ccb1-e75a-470b-9755-a3359c7d23fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.626269 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-inventory" (OuterVolumeSpecName: "inventory") pod "afb6ccb1-e75a-470b-9755-a3359c7d23fd" (UID: "afb6ccb1-e75a-470b-9755-a3359c7d23fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.702781 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.702855 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mkb9\" (UniqueName: \"kubernetes.io/projected/afb6ccb1-e75a-470b-9755-a3359c7d23fd-kube-api-access-9mkb9\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.702870 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:44 crc kubenswrapper[4768]: I1124 18:30:44.702881 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/afb6ccb1-e75a-470b-9755-a3359c7d23fd-ceph\") on node \"crc\" DevicePath \"\""
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.067217 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.067627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8th9m" event={"ID":"afb6ccb1-e75a-470b-9755-a3359c7d23fd","Type":"ContainerDied","Data":"79653574cd6f5862a8ac96e2f9b4af0d2798567e2e645138e77451841532bbe2"}
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.067711 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79653574cd6f5862a8ac96e2f9b4af0d2798567e2e645138e77451841532bbe2"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.177071 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf"]
Nov 24 18:30:45 crc kubenswrapper[4768]: E1124 18:30:45.177629 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb6ccb1-e75a-470b-9755-a3359c7d23fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.177652 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb6ccb1-e75a-470b-9755-a3359c7d23fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.177894 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb6ccb1-e75a-470b-9755-a3359c7d23fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.178695 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.181820 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.182234 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.182430 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.182599 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.182768 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.194729 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf"]
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.315662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6m4q\" (UniqueName: \"kubernetes.io/projected/2d65345f-930f-4b71-9968-a613d7c11a33-kube-api-access-v6m4q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf"
Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.316044 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf"
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.316241 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.316290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.419146 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.419228 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.419286 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6m4q\" (UniqueName: \"kubernetes.io/projected/2d65345f-930f-4b71-9968-a613d7c11a33-kube-api-access-v6m4q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.419408 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.425687 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.425767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.425725 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.438532 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6m4q\" (UniqueName: \"kubernetes.io/projected/2d65345f-930f-4b71-9968-a613d7c11a33-kube-api-access-v6m4q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:45 crc kubenswrapper[4768]: I1124 18:30:45.499906 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:46 crc kubenswrapper[4768]: I1124 18:30:46.060922 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf"] Nov 24 18:30:46 crc kubenswrapper[4768]: I1124 18:30:46.079374 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" event={"ID":"2d65345f-930f-4b71-9968-a613d7c11a33","Type":"ContainerStarted","Data":"c125115fb03c409734318cb9cc8b9aee7960408c3c92fca98538fa8777827c78"} Nov 24 18:30:47 crc kubenswrapper[4768]: I1124 18:30:47.089021 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" event={"ID":"2d65345f-930f-4b71-9968-a613d7c11a33","Type":"ContainerStarted","Data":"bcd9ca87d29bb156073b0f0350f28e260536d0bd52d01c3aac88fc9846e9dd8e"} Nov 24 18:30:47 crc kubenswrapper[4768]: I1124 18:30:47.106358 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" podStartSLOduration=1.579588411 podStartE2EDuration="2.10633507s" podCreationTimestamp="2025-11-24 18:30:45 +0000 UTC" firstStartedPulling="2025-11-24 18:30:46.072339395 +0000 UTC m=+2484.932921172" lastFinishedPulling="2025-11-24 18:30:46.599086014 +0000 UTC m=+2485.459667831" observedRunningTime="2025-11-24 18:30:47.10561659 +0000 UTC m=+2485.966198377" watchObservedRunningTime="2025-11-24 18:30:47.10633507 +0000 UTC m=+2485.966916847" Nov 24 18:30:49 crc kubenswrapper[4768]: I1124 18:30:49.589143 4768 scope.go:117] "RemoveContainer" containerID="7d0d31770b074427d01065c4f9c8c516cc2dd52adaa5af03c58fc78b329a97c8" Nov 24 18:30:49 crc kubenswrapper[4768]: I1124 18:30:49.898707 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:30:49 crc kubenswrapper[4768]: E1124 18:30:49.899676 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:30:51 crc 
kubenswrapper[4768]: I1124 18:30:51.133746 4768 generic.go:334] "Generic (PLEG): container finished" podID="2d65345f-930f-4b71-9968-a613d7c11a33" containerID="bcd9ca87d29bb156073b0f0350f28e260536d0bd52d01c3aac88fc9846e9dd8e" exitCode=0 Nov 24 18:30:51 crc kubenswrapper[4768]: I1124 18:30:51.133847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" event={"ID":"2d65345f-930f-4b71-9968-a613d7c11a33","Type":"ContainerDied","Data":"bcd9ca87d29bb156073b0f0350f28e260536d0bd52d01c3aac88fc9846e9dd8e"} Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.571906 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.665716 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6m4q\" (UniqueName: \"kubernetes.io/projected/2d65345f-930f-4b71-9968-a613d7c11a33-kube-api-access-v6m4q\") pod \"2d65345f-930f-4b71-9968-a613d7c11a33\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.665823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-inventory\") pod \"2d65345f-930f-4b71-9968-a613d7c11a33\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.665907 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ssh-key\") pod \"2d65345f-930f-4b71-9968-a613d7c11a33\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.666063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ceph\") pod \"2d65345f-930f-4b71-9968-a613d7c11a33\" (UID: \"2d65345f-930f-4b71-9968-a613d7c11a33\") " Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.672277 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d65345f-930f-4b71-9968-a613d7c11a33-kube-api-access-v6m4q" (OuterVolumeSpecName: "kube-api-access-v6m4q") pod "2d65345f-930f-4b71-9968-a613d7c11a33" (UID: "2d65345f-930f-4b71-9968-a613d7c11a33"). InnerVolumeSpecName "kube-api-access-v6m4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.689012 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ceph" (OuterVolumeSpecName: "ceph") pod "2d65345f-930f-4b71-9968-a613d7c11a33" (UID: "2d65345f-930f-4b71-9968-a613d7c11a33"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.703037 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-inventory" (OuterVolumeSpecName: "inventory") pod "2d65345f-930f-4b71-9968-a613d7c11a33" (UID: "2d65345f-930f-4b71-9968-a613d7c11a33"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.709970 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d65345f-930f-4b71-9968-a613d7c11a33" (UID: "2d65345f-930f-4b71-9968-a613d7c11a33"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.768704 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6m4q\" (UniqueName: \"kubernetes.io/projected/2d65345f-930f-4b71-9968-a613d7c11a33-kube-api-access-v6m4q\") on node \"crc\" DevicePath \"\"" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.768738 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.768749 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:30:52 crc kubenswrapper[4768]: I1124 18:30:52.768758 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d65345f-930f-4b71-9968-a613d7c11a33-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.161474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" event={"ID":"2d65345f-930f-4b71-9968-a613d7c11a33","Type":"ContainerDied","Data":"c125115fb03c409734318cb9cc8b9aee7960408c3c92fca98538fa8777827c78"} Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.161550 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf" Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.161560 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c125115fb03c409734318cb9cc8b9aee7960408c3c92fca98538fa8777827c78" Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.235319 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"] Nov 24 18:30:53 crc kubenswrapper[4768]: E1124 18:30:53.235753 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d65345f-930f-4b71-9968-a613d7c11a33" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.235776 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d65345f-930f-4b71-9968-a613d7c11a33" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.235965 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d65345f-930f-4b71-9968-a613d7c11a33" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.236723 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.238652 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.238899 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.238930 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.243692 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.244423 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.246686 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"]
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.275420 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.275483 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd57d\" (UniqueName: \"kubernetes.io/projected/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-kube-api-access-wd57d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.275568 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.275592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.376730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.376789 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd57d\" (UniqueName: \"kubernetes.io/projected/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-kube-api-access-wd57d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.376846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.376862 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.382346 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.382709 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.383137 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.393688 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd57d\" (UniqueName: \"kubernetes.io/projected/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-kube-api-access-wd57d\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:53 crc kubenswrapper[4768]: I1124 18:30:53.570868 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.046325 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz"]
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.172421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" event={"ID":"cdd7e3c1-531f-4b9b-99bb-057c5078cf95","Type":"ContainerStarted","Data":"78487df1fa07839a61c26a9ce08242a0cc548d18a4c8eb6bbf5307ed6b0a1535"}
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.750404 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vz4z"]
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.753468 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.764956 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vz4z"]
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.809653 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-utilities\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.810041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pck\" (UniqueName: \"kubernetes.io/projected/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-kube-api-access-56pck\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.810069 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-catalog-content\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.911697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-utilities\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.911841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56pck\" (UniqueName: \"kubernetes.io/projected/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-kube-api-access-56pck\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.911890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-catalog-content\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.912246 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-utilities\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.912440 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-catalog-content\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:54 crc kubenswrapper[4768]: I1124 18:30:54.933442 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56pck\" (UniqueName: \"kubernetes.io/projected/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-kube-api-access-56pck\") pod \"redhat-marketplace-6vz4z\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:55 crc kubenswrapper[4768]: I1124 18:30:55.084176 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vz4z"
Nov 24 18:30:55 crc kubenswrapper[4768]: I1124 18:30:55.184941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" event={"ID":"cdd7e3c1-531f-4b9b-99bb-057c5078cf95","Type":"ContainerStarted","Data":"c102572880c56ff96e36787169e778a39c894893c193ea2efb27641b25ff88a3"}
Nov 24 18:30:55 crc kubenswrapper[4768]: I1124 18:30:55.221679 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" podStartSLOduration=1.808760025 podStartE2EDuration="2.221647692s" podCreationTimestamp="2025-11-24 18:30:53 +0000 UTC" firstStartedPulling="2025-11-24 18:30:54.05311146 +0000 UTC m=+2492.913693237" lastFinishedPulling="2025-11-24 18:30:54.465999127 +0000 UTC m=+2493.326580904" observedRunningTime="2025-11-24 18:30:55.202641693 +0000 UTC m=+2494.063223470" watchObservedRunningTime="2025-11-24 18:30:55.221647692 +0000 UTC m=+2494.082229489"
Nov 24 18:30:55 crc kubenswrapper[4768]: I1124 18:30:55.530582 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vz4z"]
Nov 24 18:30:56 crc kubenswrapper[4768]: I1124 18:30:56.199756 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerID="d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80" exitCode=0
Nov 24 18:30:56 crc kubenswrapper[4768]: I1124 18:30:56.199871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vz4z" event={"ID":"e9b5bef8-1bea-4488-9d77-d311c32ea2ef","Type":"ContainerDied","Data":"d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80"}
Nov 24 18:30:56 crc kubenswrapper[4768]: I1124 18:30:56.200095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vz4z" event={"ID":"e9b5bef8-1bea-4488-9d77-d311c32ea2ef","Type":"ContainerStarted","Data":"84987f6421959a9fd357683f743f59aa050c2bebbb00975e041486104f612baa"}
Nov 24 18:30:57 crc kubenswrapper[4768]: I1124 18:30:57.211314 4768 generic.go:334]
"Generic (PLEG): container finished" podID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerID="daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f" exitCode=0 Nov 24 18:30:57 crc kubenswrapper[4768]: I1124 18:30:57.211391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vz4z" event={"ID":"e9b5bef8-1bea-4488-9d77-d311c32ea2ef","Type":"ContainerDied","Data":"daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f"} Nov 24 18:30:58 crc kubenswrapper[4768]: I1124 18:30:58.222998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vz4z" event={"ID":"e9b5bef8-1bea-4488-9d77-d311c32ea2ef","Type":"ContainerStarted","Data":"f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18"} Nov 24 18:31:01 crc kubenswrapper[4768]: I1124 18:31:01.906106 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:31:01 crc kubenswrapper[4768]: E1124 18:31:01.907590 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:31:05 crc kubenswrapper[4768]: I1124 18:31:05.085235 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vz4z" Nov 24 18:31:05 crc kubenswrapper[4768]: I1124 18:31:05.087418 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vz4z" Nov 24 18:31:05 crc kubenswrapper[4768]: I1124 18:31:05.145628 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vz4z" Nov 24 18:31:05 crc kubenswrapper[4768]: I1124 18:31:05.171017 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vz4z" podStartSLOduration=9.711037900000001 podStartE2EDuration="11.170974668s" podCreationTimestamp="2025-11-24 18:30:54 +0000 UTC" firstStartedPulling="2025-11-24 18:30:56.202418212 +0000 UTC m=+2495.063000009" lastFinishedPulling="2025-11-24 18:30:57.66235497 +0000 UTC m=+2496.522936777" observedRunningTime="2025-11-24 18:30:58.24514543 +0000 UTC m=+2497.105727217" watchObservedRunningTime="2025-11-24 18:31:05.170974668 +0000 UTC m=+2504.031556445" Nov 24 18:31:05 crc kubenswrapper[4768]: I1124 18:31:05.342279 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vz4z" Nov 24 18:31:05 crc kubenswrapper[4768]: I1124 18:31:05.390847 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vz4z"] Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.296785 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vz4z" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="registry-server" containerID="cri-o://f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18" gracePeriod=2 Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.766481 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vz4z" Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.871435 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-utilities\") pod \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.871649 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-catalog-content\") pod \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.871792 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56pck\" (UniqueName: \"kubernetes.io/projected/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-kube-api-access-56pck\") pod \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\" (UID: \"e9b5bef8-1bea-4488-9d77-d311c32ea2ef\") " Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.873139 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-utilities" (OuterVolumeSpecName: "utilities") pod "e9b5bef8-1bea-4488-9d77-d311c32ea2ef" (UID: "e9b5bef8-1bea-4488-9d77-d311c32ea2ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.877406 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-kube-api-access-56pck" (OuterVolumeSpecName: "kube-api-access-56pck") pod "e9b5bef8-1bea-4488-9d77-d311c32ea2ef" (UID: "e9b5bef8-1bea-4488-9d77-d311c32ea2ef"). InnerVolumeSpecName "kube-api-access-56pck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.896330 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b5bef8-1bea-4488-9d77-d311c32ea2ef" (UID: "e9b5bef8-1bea-4488-9d77-d311c32ea2ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.974688 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56pck\" (UniqueName: \"kubernetes.io/projected/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-kube-api-access-56pck\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.974730 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:07 crc kubenswrapper[4768]: I1124 18:31:07.974744 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b5bef8-1bea-4488-9d77-d311c32ea2ef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.305941 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerID="f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18" exitCode=0 Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.305985 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vz4z" event={"ID":"e9b5bef8-1bea-4488-9d77-d311c32ea2ef","Type":"ContainerDied","Data":"f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18"} Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.306013 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vz4z" event={"ID":"e9b5bef8-1bea-4488-9d77-d311c32ea2ef","Type":"ContainerDied","Data":"84987f6421959a9fd357683f743f59aa050c2bebbb00975e041486104f612baa"} Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.306034 4768 scope.go:117] "RemoveContainer" containerID="f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.306044 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vz4z" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.341447 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vz4z"] Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.349145 4768 scope.go:117] "RemoveContainer" containerID="daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.352953 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vz4z"] Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.367583 4768 scope.go:117] "RemoveContainer" containerID="d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.406388 4768 scope.go:117] "RemoveContainer" containerID="f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18" Nov 24 18:31:08 crc kubenswrapper[4768]: E1124 18:31:08.406872 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18\": container with ID starting with f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18 not found: ID does not exist" containerID="f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.406941 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18"} err="failed to get container status \"f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18\": rpc error: code = NotFound desc = could not find container \"f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18\": container with ID starting with f626ffa85749f0efa0bf68b6a8d52971eec4df978983d28296445c16d31a1d18 not found: ID does not exist" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.406996 4768 scope.go:117] "RemoveContainer" containerID="daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f" Nov 24 18:31:08 crc kubenswrapper[4768]: E1124 18:31:08.408164 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f\": container with ID starting with daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f not found: ID does not exist" containerID="daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.408198 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f"} err="failed to get container status \"daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f\": rpc error: code = NotFound desc = could not find container \"daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f\": container with ID starting with daaefd2e4ba3ad4fb52263b0fcdd9fac1d00d66f09527d8384dc9ebe296ec57f not found: ID does not exist" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.408225 4768 scope.go:117] "RemoveContainer" containerID="d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80" Nov 24 18:31:08 crc kubenswrapper[4768]: E1124 18:31:08.408676 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80\": container with ID starting with d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80 not found: ID does not exist" containerID="d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80" Nov 24 18:31:08 crc kubenswrapper[4768]: I1124 18:31:08.408711 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80"} err="failed to get container status \"d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80\": rpc error: code = NotFound desc = could not find container \"d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80\": container with ID starting with d12cc021eb292e3deb3d163b69d000c198c4abe7394bc2c3c8c628e19dd58d80 not found: ID does not exist" Nov 24 18:31:09 crc kubenswrapper[4768]: I1124 18:31:09.911592 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" path="/var/lib/kubelet/pods/e9b5bef8-1bea-4488-9d77-d311c32ea2ef/volumes" Nov 24 18:31:14 crc kubenswrapper[4768]: I1124 18:31:14.899042 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:31:14 crc kubenswrapper[4768]: E1124 18:31:14.900056 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:31:25 crc kubenswrapper[4768]: I1124 18:31:25.899391 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:31:25 crc kubenswrapper[4768]: E1124 18:31:25.900375 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:31:37 crc kubenswrapper[4768]: I1124 18:31:37.595710 4768 generic.go:334] "Generic (PLEG): container finished" podID="cdd7e3c1-531f-4b9b-99bb-057c5078cf95" containerID="c102572880c56ff96e36787169e778a39c894893c193ea2efb27641b25ff88a3" exitCode=0 Nov 24 18:31:37 crc kubenswrapper[4768]: I1124 18:31:37.595870 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" event={"ID":"cdd7e3c1-531f-4b9b-99bb-057c5078cf95","Type":"ContainerDied","Data":"c102572880c56ff96e36787169e778a39c894893c193ea2efb27641b25ff88a3"} Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.153073 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.338099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ceph\") pod \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.338267 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-inventory\") pod \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.338347 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ssh-key\") pod \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.338580 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd57d\" (UniqueName: \"kubernetes.io/projected/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-kube-api-access-wd57d\") pod \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\" (UID: \"cdd7e3c1-531f-4b9b-99bb-057c5078cf95\") " Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.346306 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ceph" (OuterVolumeSpecName: "ceph") pod "cdd7e3c1-531f-4b9b-99bb-057c5078cf95" (UID: "cdd7e3c1-531f-4b9b-99bb-057c5078cf95"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.346682 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-kube-api-access-wd57d" (OuterVolumeSpecName: "kube-api-access-wd57d") pod "cdd7e3c1-531f-4b9b-99bb-057c5078cf95" (UID: "cdd7e3c1-531f-4b9b-99bb-057c5078cf95"). InnerVolumeSpecName "kube-api-access-wd57d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.384063 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-inventory" (OuterVolumeSpecName: "inventory") pod "cdd7e3c1-531f-4b9b-99bb-057c5078cf95" (UID: "cdd7e3c1-531f-4b9b-99bb-057c5078cf95"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.401097 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cdd7e3c1-531f-4b9b-99bb-057c5078cf95" (UID: "cdd7e3c1-531f-4b9b-99bb-057c5078cf95"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.441715 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.441768 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.441783 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.441795 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd57d\" (UniqueName: \"kubernetes.io/projected/cdd7e3c1-531f-4b9b-99bb-057c5078cf95-kube-api-access-wd57d\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.621095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" event={"ID":"cdd7e3c1-531f-4b9b-99bb-057c5078cf95","Type":"ContainerDied","Data":"78487df1fa07839a61c26a9ce08242a0cc548d18a4c8eb6bbf5307ed6b0a1535"} Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.621137 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78487df1fa07839a61c26a9ce08242a0cc548d18a4c8eb6bbf5307ed6b0a1535" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.621231 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.736840 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l4ldg"] Nov 24 18:31:39 crc kubenswrapper[4768]: E1124 18:31:39.737212 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="extract-content" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.737228 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="extract-content" Nov 24 18:31:39 crc kubenswrapper[4768]: E1124 18:31:39.737241 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd7e3c1-531f-4b9b-99bb-057c5078cf95" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.737248 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd7e3c1-531f-4b9b-99bb-057c5078cf95" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:31:39 crc kubenswrapper[4768]: E1124 18:31:39.737268 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="registry-server" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.737277 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="registry-server" Nov 24 18:31:39 crc kubenswrapper[4768]: E1124 18:31:39.737288 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="extract-utilities" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.737294 
4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="extract-utilities" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.737465 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b5bef8-1bea-4488-9d77-d311c32ea2ef" containerName="registry-server" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.737499 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd7e3c1-531f-4b9b-99bb-057c5078cf95" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.738109 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.741265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.741855 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.741873 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.741872 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.741925 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.755139 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l4ldg"] Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.848533 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ceph\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.848989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.849142 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.849184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcms9\" (UniqueName: \"kubernetes.io/projected/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-kube-api-access-mcms9\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc 
kubenswrapper[4768]: I1124 18:31:39.952611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.953082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.953135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcms9\" (UniqueName: \"kubernetes.io/projected/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-kube-api-access-mcms9\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.953321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ceph\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.957528 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.957560 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.958573 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ceph\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:39 crc kubenswrapper[4768]: I1124 18:31:39.974810 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcms9\" (UniqueName: \"kubernetes.io/projected/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-kube-api-access-mcms9\") pod \"ssh-known-hosts-edpm-deployment-l4ldg\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:40 crc kubenswrapper[4768]: I1124 18:31:40.057139 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:40 crc kubenswrapper[4768]: I1124 18:31:40.359627 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-l4ldg"] Nov 24 18:31:40 crc kubenswrapper[4768]: I1124 18:31:40.633561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" event={"ID":"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76","Type":"ContainerStarted","Data":"e82c6e20a05e16abc31df1c028b99dfb7cb2995ad771c7ffc80933eea70fd05c"} Nov 24 18:31:40 crc kubenswrapper[4768]: I1124 18:31:40.899466 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:31:40 crc kubenswrapper[4768]: E1124 18:31:40.900297 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:31:41 crc kubenswrapper[4768]: I1124 18:31:41.642964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" event={"ID":"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76","Type":"ContainerStarted","Data":"d10e54db98d78451d230fdd860bf6f07d010237125f6a2f7d5c3735796566399"} Nov 24 18:31:41 crc kubenswrapper[4768]: I1124 18:31:41.667717 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" podStartSLOduration=2.140460752 podStartE2EDuration="2.667692769s" podCreationTimestamp="2025-11-24 18:31:39 +0000 UTC" firstStartedPulling="2025-11-24 18:31:40.372602165 +0000 UTC m=+2539.233183942" lastFinishedPulling="2025-11-24 18:31:40.899834142 +0000 UTC m=+2539.760415959" observedRunningTime="2025-11-24 18:31:41.659667162 +0000 UTC m=+2540.520248959" watchObservedRunningTime="2025-11-24 18:31:41.667692769 +0000 UTC m=+2540.528274556" Nov 24 18:31:50 crc kubenswrapper[4768]: I1124 18:31:50.737357 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" containerID="d10e54db98d78451d230fdd860bf6f07d010237125f6a2f7d5c3735796566399" exitCode=0 Nov 24 18:31:50 crc kubenswrapper[4768]: I1124 18:31:50.737473 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" event={"ID":"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76","Type":"ContainerDied","Data":"d10e54db98d78451d230fdd860bf6f07d010237125f6a2f7d5c3735796566399"} Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.235188 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.397198 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ceph\") pod \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.397283 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ssh-key-openstack-edpm-ipam\") pod \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.397334 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-inventory-0\") pod \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.397378 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcms9\" (UniqueName: \"kubernetes.io/projected/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-kube-api-access-mcms9\") pod \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\" (UID: \"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76\") " Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.405801 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-kube-api-access-mcms9" (OuterVolumeSpecName: "kube-api-access-mcms9") pod "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" (UID: "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76"). InnerVolumeSpecName "kube-api-access-mcms9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.407200 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ceph" (OuterVolumeSpecName: "ceph") pod "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" (UID: "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.427078 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" (UID: "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.445545 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" (UID: "e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.500322 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcms9\" (UniqueName: \"kubernetes.io/projected/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-kube-api-access-mcms9\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.500376 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.500407 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.500431 4768 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.763285 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" event={"ID":"e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76","Type":"ContainerDied","Data":"e82c6e20a05e16abc31df1c028b99dfb7cb2995ad771c7ffc80933eea70fd05c"} Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.763352 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82c6e20a05e16abc31df1c028b99dfb7cb2995ad771c7ffc80933eea70fd05c" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.763361 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-l4ldg" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.851127 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb"] Nov 24 18:31:52 crc kubenswrapper[4768]: E1124 18:31:52.851644 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" containerName="ssh-known-hosts-edpm-deployment" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.851671 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" containerName="ssh-known-hosts-edpm-deployment" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.851926 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76" containerName="ssh-known-hosts-edpm-deployment" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.852685 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.855967 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.857222 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.862435 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.862475 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.862621 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:31:52 crc kubenswrapper[4768]: I1124 18:31:52.868699 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb"] Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.012732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.013110 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.013358 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.013396 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7nx\" (UniqueName: \"kubernetes.io/projected/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-kube-api-access-7z7nx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.115710 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.116205 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7nx\" (UniqueName: 
\"kubernetes.io/projected/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-kube-api-access-7z7nx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.116337 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.116457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.122982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.123254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.124663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.135895 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7nx\" (UniqueName: \"kubernetes.io/projected/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-kube-api-access-7z7nx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gdnxb\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.172686 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.709817 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb"] Nov 24 18:31:53 crc kubenswrapper[4768]: I1124 18:31:53.774076 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" event={"ID":"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac","Type":"ContainerStarted","Data":"aa18e27acd440241f00ecaaad623a74e6c33fadb1de6995434907e274b7ba58c"} Nov 24 18:31:54 crc kubenswrapper[4768]: I1124 18:31:54.786921 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" event={"ID":"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac","Type":"ContainerStarted","Data":"7aa5534f697b3838d53d7e7bc354b5b78425479bb0a848ddf9acc3d5d87ff29f"} Nov 24 18:31:54 crc kubenswrapper[4768]: I1124 18:31:54.823033 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" podStartSLOduration=2.245633971 podStartE2EDuration="2.8229988s" podCreationTimestamp="2025-11-24 18:31:52 +0000 UTC" firstStartedPulling="2025-11-24 18:31:53.716112462 +0000 UTC m=+2552.576694239" lastFinishedPulling="2025-11-24 18:31:54.293477251 +0000 UTC m=+2553.154059068" observedRunningTime="2025-11-24 18:31:54.806565797 +0000 UTC m=+2553.667147614" watchObservedRunningTime="2025-11-24 18:31:54.8229988 +0000 UTC m=+2553.683580617" Nov 24 18:31:55 crc kubenswrapper[4768]: I1124 18:31:55.899345 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:31:55 crc kubenswrapper[4768]: E1124 18:31:55.900023 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:32:01 crc kubenswrapper[4768]: I1124 18:32:01.859620 4768 generic.go:334] "Generic (PLEG): container finished" podID="621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" containerID="7aa5534f697b3838d53d7e7bc354b5b78425479bb0a848ddf9acc3d5d87ff29f" exitCode=0 Nov 24 18:32:01 crc kubenswrapper[4768]: I1124 18:32:01.859710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" event={"ID":"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac","Type":"ContainerDied","Data":"7aa5534f697b3838d53d7e7bc354b5b78425479bb0a848ddf9acc3d5d87ff29f"} Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.347072 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.427514 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z7nx\" (UniqueName: \"kubernetes.io/projected/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-kube-api-access-7z7nx\") pod \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.427701 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-inventory\") pod \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.427949 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ceph\") pod \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.428014 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ssh-key\") pod \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\" (UID: \"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac\") " Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.441019 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ceph" (OuterVolumeSpecName: "ceph") pod "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" (UID: "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.441659 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-kube-api-access-7z7nx" (OuterVolumeSpecName: "kube-api-access-7z7nx") pod "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" (UID: "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac"). InnerVolumeSpecName "kube-api-access-7z7nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.460973 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" (UID: "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.464889 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-inventory" (OuterVolumeSpecName: "inventory") pod "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" (UID: "621b6bcf-7a5c-4a85-9a8f-379e95bad6ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.530834 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.530873 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.530888 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z7nx\" (UniqueName: \"kubernetes.io/projected/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-kube-api-access-7z7nx\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.530902 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/621b6bcf-7a5c-4a85-9a8f-379e95bad6ac-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.882941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" event={"ID":"621b6bcf-7a5c-4a85-9a8f-379e95bad6ac","Type":"ContainerDied","Data":"aa18e27acd440241f00ecaaad623a74e6c33fadb1de6995434907e274b7ba58c"} Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.883004 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa18e27acd440241f00ecaaad623a74e6c33fadb1de6995434907e274b7ba58c" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.883031 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gdnxb" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.979672 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg"] Nov 24 18:32:03 crc kubenswrapper[4768]: E1124 18:32:03.980730 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.980770 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.981103 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="621b6bcf-7a5c-4a85-9a8f-379e95bad6ac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.983990 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.986665 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.986942 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.987584 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.988910 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.989844 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:32:03 crc kubenswrapper[4768]: I1124 18:32:03.995301 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg"] Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.043254 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpnb\" (UniqueName: \"kubernetes.io/projected/474b1f4d-271b-4abb-bad4-fef9d86fff99-kube-api-access-9cpnb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.043581 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.043660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.043928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.146124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cpnb\" (UniqueName: \"kubernetes.io/projected/474b1f4d-271b-4abb-bad4-fef9d86fff99-kube-api-access-9cpnb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.146238 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.146326 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.146455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.150546 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.152265 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.152456 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.166278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cpnb\" (UniqueName: \"kubernetes.io/projected/474b1f4d-271b-4abb-bad4-fef9d86fff99-kube-api-access-9cpnb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.306017 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:04 crc kubenswrapper[4768]: I1124 18:32:04.887254 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg"] Nov 24 18:32:05 crc kubenswrapper[4768]: I1124 18:32:05.921527 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" event={"ID":"474b1f4d-271b-4abb-bad4-fef9d86fff99","Type":"ContainerStarted","Data":"3740ef2449bf50be605327d7e6e0b75fbbddf929f736ad655f7e385a181059a1"} Nov 24 18:32:05 crc kubenswrapper[4768]: I1124 18:32:05.922474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" event={"ID":"474b1f4d-271b-4abb-bad4-fef9d86fff99","Type":"ContainerStarted","Data":"974880643af4d6de98f1a0706698284b6c880f372ca15b9dde73fa33dec1b981"} Nov 24 18:32:05 crc kubenswrapper[4768]: I1124 18:32:05.940657 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" podStartSLOduration=2.5017779 podStartE2EDuration="2.940638024s" podCreationTimestamp="2025-11-24 18:32:03 +0000 UTC" firstStartedPulling="2025-11-24 18:32:04.900909476 +0000 UTC m=+2563.761491253" lastFinishedPulling="2025-11-24 18:32:05.33976961 +0000 UTC m=+2564.200351377" observedRunningTime="2025-11-24 18:32:05.938808264 +0000 UTC m=+2564.799390071" watchObservedRunningTime="2025-11-24 18:32:05.940638024 +0000 UTC m=+2564.801219811" Nov 24 18:32:06 crc kubenswrapper[4768]: I1124 18:32:06.898399 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:32:06 crc kubenswrapper[4768]: E1124 18:32:06.898859 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:32:15 crc kubenswrapper[4768]: I1124 18:32:15.008443 4768 generic.go:334] "Generic (PLEG): container finished" podID="474b1f4d-271b-4abb-bad4-fef9d86fff99" containerID="3740ef2449bf50be605327d7e6e0b75fbbddf929f736ad655f7e385a181059a1" exitCode=0 Nov 24 18:32:15 crc kubenswrapper[4768]: I1124 18:32:15.008538 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" event={"ID":"474b1f4d-271b-4abb-bad4-fef9d86fff99","Type":"ContainerDied","Data":"3740ef2449bf50be605327d7e6e0b75fbbddf929f736ad655f7e385a181059a1"} Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.440715 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.495333 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ssh-key\") pod \"474b1f4d-271b-4abb-bad4-fef9d86fff99\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.495661 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ceph\") pod \"474b1f4d-271b-4abb-bad4-fef9d86fff99\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.495709 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cpnb\" (UniqueName: \"kubernetes.io/projected/474b1f4d-271b-4abb-bad4-fef9d86fff99-kube-api-access-9cpnb\") pod \"474b1f4d-271b-4abb-bad4-fef9d86fff99\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.495901 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-inventory\") pod \"474b1f4d-271b-4abb-bad4-fef9d86fff99\" (UID: \"474b1f4d-271b-4abb-bad4-fef9d86fff99\") " Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.502515 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474b1f4d-271b-4abb-bad4-fef9d86fff99-kube-api-access-9cpnb" (OuterVolumeSpecName: "kube-api-access-9cpnb") pod "474b1f4d-271b-4abb-bad4-fef9d86fff99" (UID: "474b1f4d-271b-4abb-bad4-fef9d86fff99"). InnerVolumeSpecName "kube-api-access-9cpnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.502918 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ceph" (OuterVolumeSpecName: "ceph") pod "474b1f4d-271b-4abb-bad4-fef9d86fff99" (UID: "474b1f4d-271b-4abb-bad4-fef9d86fff99"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.526863 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "474b1f4d-271b-4abb-bad4-fef9d86fff99" (UID: "474b1f4d-271b-4abb-bad4-fef9d86fff99"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.528768 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-inventory" (OuterVolumeSpecName: "inventory") pod "474b1f4d-271b-4abb-bad4-fef9d86fff99" (UID: "474b1f4d-271b-4abb-bad4-fef9d86fff99"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.597813 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.597864 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cpnb\" (UniqueName: \"kubernetes.io/projected/474b1f4d-271b-4abb-bad4-fef9d86fff99-kube-api-access-9cpnb\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.597880 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:16 crc kubenswrapper[4768]: I1124 18:32:16.597892 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/474b1f4d-271b-4abb-bad4-fef9d86fff99-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.034193 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" event={"ID":"474b1f4d-271b-4abb-bad4-fef9d86fff99","Type":"ContainerDied","Data":"974880643af4d6de98f1a0706698284b6c880f372ca15b9dde73fa33dec1b981"} Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.034251 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974880643af4d6de98f1a0706698284b6c880f372ca15b9dde73fa33dec1b981" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.034298 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.176997 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk"] Nov 24 18:32:17 crc kubenswrapper[4768]: E1124 18:32:17.178042 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474b1f4d-271b-4abb-bad4-fef9d86fff99" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.178068 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="474b1f4d-271b-4abb-bad4-fef9d86fff99" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.178297 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="474b1f4d-271b-4abb-bad4-fef9d86fff99" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.179252 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.182082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.182448 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.182899 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.183042 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.183178 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.183501 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.183669 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.189895 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.201406 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk"] Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312293 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312364 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312408 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312467 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312520 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312559 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm6j\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-kube-api-access-sqm6j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312622 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312719 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312761 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 
18:32:17.312796 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.312818 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.414780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.414848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.414903 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.414954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.414988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415126 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415195 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415262 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415368 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqm6j\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-kube-api-access-sqm6j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.415413 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.420015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.420056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.421702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.421875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.422072 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.422401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.422547 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.422884 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 
18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.423867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.424036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.424472 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.424977 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.443731 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqm6j\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-kube-api-access-sqm6j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.505781 4768 util.go:30] "No sandbox for pod can be found. 
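
Note the two volume plugin kinds being mounted for install-certs: kubernetes.io/secret for ssh-key, inventory, ceph and the per-service *-combined-ca-bundle volumes, and kubernetes.io/projected for the *-default-certs-0 volumes, which merge one or more certificate Secrets into a single mount directory. A sketch of how the two kinds are declared on a pod spec, assuming k8s.io/api is on the module path (illustrative, not the operator's actual manifest):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		// Plain secret volume, as used for ssh-key, inventory, ceph and the
		// *-combined-ca-bundle mounts.
		{
			Name: "ssh-key",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName: "dataplane-ansible-ssh-private-key-secret",
				},
			},
		},
		// Projected volume, as used for the *-default-certs-0 mounts; several
		// secrets can be merged under one mount point via Sources.
		{
			Name: "openstack-edpm-ipam-ovn-default-certs-0",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "openstack-edpm-ipam-ovn-default-certs-0",
							},
						},
					}},
				},
			},
		},
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}
```
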
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:17 crc kubenswrapper[4768]: I1124 18:32:17.898931 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59" Nov 24 18:32:18 crc kubenswrapper[4768]: I1124 18:32:18.151922 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk"] Nov 24 18:32:18 crc kubenswrapper[4768]: W1124 18:32:18.162336 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5889b94_1134_4803_88de_f82ae87f5720.slice/crio-02b020e63398a7e737008c6f3ab72c2fd70a1eb810874fda88e88cc86d91cc52 WatchSource:0}: Error finding container 02b020e63398a7e737008c6f3ab72c2fd70a1eb810874fda88e88cc86d91cc52: Status 404 returned error can't find the container with id 02b020e63398a7e737008c6f3ab72c2fd70a1eb810874fda88e88cc86d91cc52 Nov 24 18:32:19 crc kubenswrapper[4768]: I1124 18:32:19.061423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"d1313295cd9893d7f5d85adba62cfd4ff2acd960ce0acf6f5d9782bb54fd8f87"} Nov 24 18:32:19 crc kubenswrapper[4768]: I1124 18:32:19.064868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" event={"ID":"f5889b94-1134-4803-88de-f82ae87f5720","Type":"ContainerStarted","Data":"02b020e63398a7e737008c6f3ab72c2fd70a1eb810874fda88e88cc86d91cc52"} Nov 24 18:32:20 crc kubenswrapper[4768]: I1124 18:32:20.078865 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" event={"ID":"f5889b94-1134-4803-88de-f82ae87f5720","Type":"ContainerStarted","Data":"048124859d2e393ff6d5c45ae98f857585f287d2150e3a497ecc950279b60693"} Nov 24 18:32:20 crc kubenswrapper[4768]: I1124 18:32:20.112453 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" podStartSLOduration=2.579900818 podStartE2EDuration="3.112409598s" podCreationTimestamp="2025-11-24 18:32:17 +0000 UTC" firstStartedPulling="2025-11-24 18:32:18.166329179 +0000 UTC m=+2577.026910976" lastFinishedPulling="2025-11-24 18:32:18.698837949 +0000 UTC m=+2577.559419756" observedRunningTime="2025-11-24 18:32:20.108301467 +0000 UTC m=+2578.968883264" watchObservedRunningTime="2025-11-24 18:32:20.112409598 +0000 UTC m=+2578.972991395" Nov 24 18:32:51 crc kubenswrapper[4768]: I1124 18:32:51.422748 4768 generic.go:334] "Generic (PLEG): container finished" podID="f5889b94-1134-4803-88de-f82ae87f5720" containerID="048124859d2e393ff6d5c45ae98f857585f287d2150e3a497ecc950279b60693" exitCode=0 Nov 24 18:32:51 crc kubenswrapper[4768]: I1124 18:32:51.422888 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" event={"ID":"f5889b94-1134-4803-88de-f82ae87f5720","Type":"ContainerDied","Data":"048124859d2e393ff6d5c45ae98f857585f287d2150e3a497ecc950279b60693"} Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.913863 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992149 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-inventory\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992448 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992566 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992596 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-neutron-metadata-combined-ca-bundle\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992620 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ssh-key\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992660 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-libvirt-combined-ca-bundle\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992703 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992733 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-repo-setup-combined-ca-bundle\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ovn-combined-ca-bundle\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 
crc kubenswrapper[4768]: I1124 18:32:52.992795 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqm6j\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-kube-api-access-sqm6j\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992812 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-bootstrap-combined-ca-bundle\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992842 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-nova-combined-ca-bundle\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:52 crc kubenswrapper[4768]: I1124 18:32:52.992911 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ceph\") pod \"f5889b94-1134-4803-88de-f82ae87f5720\" (UID: \"f5889b94-1134-4803-88de-f82ae87f5720\") " Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.001850 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.003283 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.003355 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.003394 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.003461 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.003958 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.004642 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.006701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ceph" (OuterVolumeSpecName: "ceph") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.006767 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.009190 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-kube-api-access-sqm6j" (OuterVolumeSpecName: "kube-api-access-sqm6j") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "kube-api-access-sqm6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.009364 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.031944 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.032376 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-inventory" (OuterVolumeSpecName: "inventory") pod "f5889b94-1134-4803-88de-f82ae87f5720" (UID: "f5889b94-1134-4803-88de-f82ae87f5720"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.095934 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqm6j\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-kube-api-access-sqm6j\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.095967 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.095978 4768 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.095989 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.095999 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096008 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096019 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096031 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096040 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096049 4768 
reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096060 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f5889b94-1134-4803-88de-f82ae87f5720-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096069 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.096081 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5889b94-1134-4803-88de-f82ae87f5720-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.452372 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" event={"ID":"f5889b94-1134-4803-88de-f82ae87f5720","Type":"ContainerDied","Data":"02b020e63398a7e737008c6f3ab72c2fd70a1eb810874fda88e88cc86d91cc52"} Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.452446 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b020e63398a7e737008c6f3ab72c2fd70a1eb810874fda88e88cc86d91cc52" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.452574 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.550905 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8"] Nov 24 18:32:53 crc kubenswrapper[4768]: E1124 18:32:53.551311 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5889b94-1134-4803-88de-f82ae87f5720" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.551329 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5889b94-1134-4803-88de-f82ae87f5720" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.551508 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5889b94-1134-4803-88de-f82ae87f5720" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.552162 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.554970 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.554979 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.555430 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.555671 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.556129 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.567971 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8"] Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.608257 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvqhg\" (UniqueName: \"kubernetes.io/projected/d974ce0f-88e9-465d-9c74-6a7531593c4b-kube-api-access-rvqhg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.608321 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.608425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.608650 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.710869 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.711373 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.711710 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvqhg\" (UniqueName: \"kubernetes.io/projected/d974ce0f-88e9-465d-9c74-6a7531593c4b-kube-api-access-rvqhg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.711955 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.716171 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.716524 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.719648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.732260 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvqhg\" (UniqueName: \"kubernetes.io/projected/d974ce0f-88e9-465d-9c74-6a7531593c4b-kube-api-access-rvqhg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:53 crc kubenswrapper[4768]: I1124 18:32:53.871032 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:32:54 crc kubenswrapper[4768]: I1124 18:32:54.488790 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8"] Nov 24 18:32:55 crc kubenswrapper[4768]: I1124 18:32:55.474871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" event={"ID":"d974ce0f-88e9-465d-9c74-6a7531593c4b","Type":"ContainerStarted","Data":"82eb4f1ee94fa29b068cf06bbffede0b3ef5955d26b9e8ca1f22f0c5d8ac1a56"} Nov 24 18:32:56 crc kubenswrapper[4768]: I1124 18:32:56.485052 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" event={"ID":"d974ce0f-88e9-465d-9c74-6a7531593c4b","Type":"ContainerStarted","Data":"8e5df32b6e786cbbed98f53574f87b71b76c9740c712f97fdda8eda5e8382c96"} Nov 24 18:32:56 crc kubenswrapper[4768]: I1124 18:32:56.506248 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" podStartSLOduration=2.702465317 podStartE2EDuration="3.50622696s" podCreationTimestamp="2025-11-24 18:32:53 +0000 UTC" firstStartedPulling="2025-11-24 18:32:54.499312201 +0000 UTC m=+2613.359893988" lastFinishedPulling="2025-11-24 18:32:55.303073854 +0000 UTC m=+2614.163655631" observedRunningTime="2025-11-24 18:32:56.499376274 +0000 UTC m=+2615.359958071" watchObservedRunningTime="2025-11-24 18:32:56.50622696 +0000 UTC m=+2615.366808737" Nov 24 18:33:01 crc kubenswrapper[4768]: I1124 18:33:01.537110 4768 generic.go:334] "Generic (PLEG): container finished" podID="d974ce0f-88e9-465d-9c74-6a7531593c4b" containerID="8e5df32b6e786cbbed98f53574f87b71b76c9740c712f97fdda8eda5e8382c96" exitCode=0 Nov 24 18:33:01 crc kubenswrapper[4768]: I1124 18:33:01.537210 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" event={"ID":"d974ce0f-88e9-465d-9c74-6a7531593c4b","Type":"ContainerDied","Data":"8e5df32b6e786cbbed98f53574f87b71b76c9740c712f97fdda8eda5e8382c96"} Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.000267 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.113380 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ssh-key\") pod \"d974ce0f-88e9-465d-9c74-6a7531593c4b\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.113463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-inventory\") pod \"d974ce0f-88e9-465d-9c74-6a7531593c4b\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.113633 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ceph\") pod \"d974ce0f-88e9-465d-9c74-6a7531593c4b\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.113791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvqhg\" (UniqueName: \"kubernetes.io/projected/d974ce0f-88e9-465d-9c74-6a7531593c4b-kube-api-access-rvqhg\") pod \"d974ce0f-88e9-465d-9c74-6a7531593c4b\" (UID: \"d974ce0f-88e9-465d-9c74-6a7531593c4b\") " Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.121159 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d974ce0f-88e9-465d-9c74-6a7531593c4b-kube-api-access-rvqhg" (OuterVolumeSpecName: "kube-api-access-rvqhg") pod "d974ce0f-88e9-465d-9c74-6a7531593c4b" (UID: "d974ce0f-88e9-465d-9c74-6a7531593c4b"). InnerVolumeSpecName "kube-api-access-rvqhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.121745 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ceph" (OuterVolumeSpecName: "ceph") pod "d974ce0f-88e9-465d-9c74-6a7531593c4b" (UID: "d974ce0f-88e9-465d-9c74-6a7531593c4b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.140339 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-inventory" (OuterVolumeSpecName: "inventory") pod "d974ce0f-88e9-465d-9c74-6a7531593c4b" (UID: "d974ce0f-88e9-465d-9c74-6a7531593c4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.151245 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d974ce0f-88e9-465d-9c74-6a7531593c4b" (UID: "d974ce0f-88e9-465d-9c74-6a7531593c4b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.215765 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.215791 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.215804 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d974ce0f-88e9-465d-9c74-6a7531593c4b-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.215817 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvqhg\" (UniqueName: \"kubernetes.io/projected/d974ce0f-88e9-465d-9c74-6a7531593c4b-kube-api-access-rvqhg\") on node \"crc\" DevicePath \"\"" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.561250 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" event={"ID":"d974ce0f-88e9-465d-9c74-6a7531593c4b","Type":"ContainerDied","Data":"82eb4f1ee94fa29b068cf06bbffede0b3ef5955d26b9e8ca1f22f0c5d8ac1a56"} Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.561299 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82eb4f1ee94fa29b068cf06bbffede0b3ef5955d26b9e8ca1f22f0c5d8ac1a56" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.561354 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.647902 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w"] Nov 24 18:33:03 crc kubenswrapper[4768]: E1124 18:33:03.648535 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d974ce0f-88e9-465d-9c74-6a7531593c4b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.648563 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d974ce0f-88e9-465d-9c74-6a7531593c4b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.648838 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d974ce0f-88e9-465d-9c74-6a7531593c4b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.649868 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.652813 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.653098 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.653645 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.653646 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.653825 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.653862 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.659911 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w"] Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.828146 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.828591 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.828668 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.828776 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4zh\" (UniqueName: \"kubernetes.io/projected/fd87ee72-91d9-40a2-a95f-f4358b524d8f-kube-api-access-nk4zh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.828835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 
crc kubenswrapper[4768]: I1124 18:33:03.828861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.930864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.930922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.930981 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.931033 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.931083 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.931138 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4zh\" (UniqueName: \"kubernetes.io/projected/fd87ee72-91d9-40a2-a95f-f4358b524d8f-kube-api-access-nk4zh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.933135 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.937775 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.937937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.938055 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.938246 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.951074 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4zh\" (UniqueName: \"kubernetes.io/projected/fd87ee72-91d9-40a2-a95f-f4358b524d8f-kube-api-access-nk4zh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zcf2w\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:03 crc kubenswrapper[4768]: I1124 18:33:03.977432 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:33:04 crc kubenswrapper[4768]: I1124 18:33:04.517616 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w"] Nov 24 18:33:04 crc kubenswrapper[4768]: I1124 18:33:04.575675 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" event={"ID":"fd87ee72-91d9-40a2-a95f-f4358b524d8f","Type":"ContainerStarted","Data":"983db268a8a765161c100c94b3966c0911b226aef58ca646ebfef31f38ff97db"} Nov 24 18:33:05 crc kubenswrapper[4768]: I1124 18:33:05.595479 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" event={"ID":"fd87ee72-91d9-40a2-a95f-f4358b524d8f","Type":"ContainerStarted","Data":"e10685e436548c493b1c818a8fe3f220a532edf9e1b587023a9cd1d5de4ff4e1"} Nov 24 18:33:05 crc kubenswrapper[4768]: I1124 18:33:05.628914 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" podStartSLOduration=2.17258579 podStartE2EDuration="2.628885085s" podCreationTimestamp="2025-11-24 18:33:03 +0000 UTC" firstStartedPulling="2025-11-24 18:33:04.524066042 +0000 UTC m=+2623.384647859" lastFinishedPulling="2025-11-24 18:33:04.980365387 +0000 UTC m=+2623.840947154" observedRunningTime="2025-11-24 18:33:05.614875427 +0000 UTC m=+2624.475457204" watchObservedRunningTime="2025-11-24 18:33:05.628885085 +0000 UTC m=+2624.489466862" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.778208 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-566lr"] Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.780781 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.791079 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-catalog-content\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.791158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-utilities\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.791692 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7d9v\" (UniqueName: \"kubernetes.io/projected/6f4daa20-719e-4694-a368-9de45d70e84f-kube-api-access-t7d9v\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.794074 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-566lr"] Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.893559 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-catalog-content\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.893941 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-utilities\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.894118 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7d9v\" (UniqueName: \"kubernetes.io/projected/6f4daa20-719e-4694-a368-9de45d70e84f-kube-api-access-t7d9v\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.894114 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-catalog-content\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.894358 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-utilities\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.936618 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t7d9v\" (UniqueName: \"kubernetes.io/projected/6f4daa20-719e-4694-a368-9de45d70e84f-kube-api-access-t7d9v\") pod \"certified-operators-566lr\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.978443 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjsjr"] Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.983862 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.994739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-utilities\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.994845 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-catalog-content\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.994948 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks6z7\" (UniqueName: \"kubernetes.io/projected/ac32b1d2-20bd-47cd-992f-f668c68a4a86-kube-api-access-ks6z7\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:56 crc kubenswrapper[4768]: I1124 18:33:56.999471 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjsjr"] Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.096352 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-catalog-content\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.096564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks6z7\" (UniqueName: \"kubernetes.io/projected/ac32b1d2-20bd-47cd-992f-f668c68a4a86-kube-api-access-ks6z7\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.096612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-utilities\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.097121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-utilities\") pod \"community-operators-gjsjr\" 
(UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.097750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-catalog-content\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.105615 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.117125 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks6z7\" (UniqueName: \"kubernetes.io/projected/ac32b1d2-20bd-47cd-992f-f668c68a4a86-kube-api-access-ks6z7\") pod \"community-operators-gjsjr\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.314172 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.635297 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-566lr"] Nov 24 18:33:57 crc kubenswrapper[4768]: I1124 18:33:57.716837 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjsjr"] Nov 24 18:33:58 crc kubenswrapper[4768]: E1124 18:33:58.096827 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac32b1d2_20bd_47cd_992f_f668c68a4a86.slice/crio-conmon-f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f4daa20_719e_4694_a368_9de45d70e84f.slice/crio-conmon-1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f4daa20_719e_4694_a368_9de45d70e84f.slice/crio-1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac32b1d2_20bd_47cd_992f_f668c68a4a86.slice/crio-f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022.scope\": RecentStats: unable to find data in memory cache]" Nov 24 18:33:58 crc kubenswrapper[4768]: I1124 18:33:58.163710 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f4daa20-719e-4694-a368-9de45d70e84f" containerID="1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5" exitCode=0 Nov 24 18:33:58 crc kubenswrapper[4768]: I1124 18:33:58.163794 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerDied","Data":"1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5"} Nov 24 18:33:58 crc kubenswrapper[4768]: I1124 18:33:58.163824 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" 
event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerStarted","Data":"48bc61af758b658de3b7639e6f394f3de68054d498f04de2e6499b42dec578d6"} Nov 24 18:33:58 crc kubenswrapper[4768]: I1124 18:33:58.165547 4768 generic.go:334] "Generic (PLEG): container finished" podID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerID="f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022" exitCode=0 Nov 24 18:33:58 crc kubenswrapper[4768]: I1124 18:33:58.165598 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerDied","Data":"f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022"} Nov 24 18:33:58 crc kubenswrapper[4768]: I1124 18:33:58.165627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerStarted","Data":"e7fe630363494cdfaf989bdb07dec4921853d7aaf949f648909e66134d8a5ed1"} Nov 24 18:33:59 crc kubenswrapper[4768]: I1124 18:33:59.184209 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerStarted","Data":"3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87"} Nov 24 18:33:59 crc kubenswrapper[4768]: I1124 18:33:59.187641 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerStarted","Data":"842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946"} Nov 24 18:34:00 crc kubenswrapper[4768]: I1124 18:34:00.201282 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f4daa20-719e-4694-a368-9de45d70e84f" containerID="3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87" exitCode=0 Nov 24 18:34:00 crc kubenswrapper[4768]: I1124 18:34:00.201409 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerDied","Data":"3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87"} Nov 24 18:34:00 crc kubenswrapper[4768]: I1124 18:34:00.204484 4768 generic.go:334] "Generic (PLEG): container finished" podID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerID="842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946" exitCode=0 Nov 24 18:34:00 crc kubenswrapper[4768]: I1124 18:34:00.204533 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerDied","Data":"842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946"} Nov 24 18:34:01 crc kubenswrapper[4768]: I1124 18:34:01.216256 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerStarted","Data":"913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6"} Nov 24 18:34:01 crc kubenswrapper[4768]: I1124 18:34:01.220219 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerStarted","Data":"7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071"} Nov 24 18:34:01 crc kubenswrapper[4768]: I1124 
18:34:01.244352 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-566lr" podStartSLOduration=2.725709399 podStartE2EDuration="5.244327767s" podCreationTimestamp="2025-11-24 18:33:56 +0000 UTC" firstStartedPulling="2025-11-24 18:33:58.166473759 +0000 UTC m=+2677.027055536" lastFinishedPulling="2025-11-24 18:34:00.685092137 +0000 UTC m=+2679.545673904" observedRunningTime="2025-11-24 18:34:01.234888973 +0000 UTC m=+2680.095470750" watchObservedRunningTime="2025-11-24 18:34:01.244327767 +0000 UTC m=+2680.104909544" Nov 24 18:34:01 crc kubenswrapper[4768]: I1124 18:34:01.260965 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjsjr" podStartSLOduration=2.755662497 podStartE2EDuration="5.260938936s" podCreationTimestamp="2025-11-24 18:33:56 +0000 UTC" firstStartedPulling="2025-11-24 18:33:58.167516046 +0000 UTC m=+2677.028097833" lastFinishedPulling="2025-11-24 18:34:00.672792495 +0000 UTC m=+2679.533374272" observedRunningTime="2025-11-24 18:34:01.259928539 +0000 UTC m=+2680.120510316" watchObservedRunningTime="2025-11-24 18:34:01.260938936 +0000 UTC m=+2680.121520713" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.106066 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.106574 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.175892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.315306 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.315686 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.367514 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:34:07 crc kubenswrapper[4768]: I1124 18:34:07.382120 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:34:08 crc kubenswrapper[4768]: I1124 18:34:08.340896 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:34:09 crc kubenswrapper[4768]: I1124 18:34:09.564573 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-566lr"] Nov 24 18:34:09 crc kubenswrapper[4768]: I1124 18:34:09.565116 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-566lr" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="registry-server" containerID="cri-o://913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6" gracePeriod=2 Nov 24 18:34:09 crc kubenswrapper[4768]: I1124 18:34:09.768516 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjsjr"] Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.054993 4768 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.194247 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-utilities\") pod \"6f4daa20-719e-4694-a368-9de45d70e84f\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.194309 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-catalog-content\") pod \"6f4daa20-719e-4694-a368-9de45d70e84f\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.194336 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7d9v\" (UniqueName: \"kubernetes.io/projected/6f4daa20-719e-4694-a368-9de45d70e84f-kube-api-access-t7d9v\") pod \"6f4daa20-719e-4694-a368-9de45d70e84f\" (UID: \"6f4daa20-719e-4694-a368-9de45d70e84f\") " Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.196127 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-utilities" (OuterVolumeSpecName: "utilities") pod "6f4daa20-719e-4694-a368-9de45d70e84f" (UID: "6f4daa20-719e-4694-a368-9de45d70e84f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.204614 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4daa20-719e-4694-a368-9de45d70e84f-kube-api-access-t7d9v" (OuterVolumeSpecName: "kube-api-access-t7d9v") pod "6f4daa20-719e-4694-a368-9de45d70e84f" (UID: "6f4daa20-719e-4694-a368-9de45d70e84f"). InnerVolumeSpecName "kube-api-access-t7d9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.254058 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f4daa20-719e-4694-a368-9de45d70e84f" (UID: "6f4daa20-719e-4694-a368-9de45d70e84f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.297019 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.297211 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f4daa20-719e-4694-a368-9de45d70e84f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.297324 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7d9v\" (UniqueName: \"kubernetes.io/projected/6f4daa20-719e-4694-a368-9de45d70e84f-kube-api-access-t7d9v\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.322704 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f4daa20-719e-4694-a368-9de45d70e84f" containerID="913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6" exitCode=0 Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.322774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerDied","Data":"913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6"} Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.322849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-566lr" event={"ID":"6f4daa20-719e-4694-a368-9de45d70e84f","Type":"ContainerDied","Data":"48bc61af758b658de3b7639e6f394f3de68054d498f04de2e6499b42dec578d6"} Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.322885 4768 scope.go:117] "RemoveContainer" containerID="913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.323010 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjsjr" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="registry-server" containerID="cri-o://7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071" gracePeriod=2 Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.323355 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-566lr" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.354867 4768 scope.go:117] "RemoveContainer" containerID="3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.370272 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-566lr"] Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.384390 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-566lr"] Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.452628 4768 scope.go:117] "RemoveContainer" containerID="1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.517457 4768 scope.go:117] "RemoveContainer" containerID="913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6" Nov 24 18:34:10 crc kubenswrapper[4768]: E1124 18:34:10.518109 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6\": container with ID starting with 913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6 not found: ID does not exist" containerID="913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.518153 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6"} err="failed to get container status \"913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6\": rpc error: code = NotFound desc = could not find container \"913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6\": container with ID starting with 913a13d21a0fa21bbe3f137d85f241a813c326a8af5674cd4c50d5844c217eb6 not found: ID does not exist" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.518189 4768 scope.go:117] "RemoveContainer" containerID="3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87" Nov 24 18:34:10 crc kubenswrapper[4768]: E1124 18:34:10.519306 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87\": container with ID starting with 3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87 not found: ID does not exist" containerID="3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.519376 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87"} err="failed to get container status \"3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87\": rpc error: code = NotFound desc = could not find container \"3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87\": container with ID starting with 3bc7c8af0ec81d3c13b0b7307e2fb7de8d0cb03bcefd644d3bd1c0d9cd3b5b87 not found: ID does not exist" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.519424 4768 scope.go:117] "RemoveContainer" containerID="1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5" Nov 24 18:34:10 crc kubenswrapper[4768]: E1124 18:34:10.520129 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5\": container with ID starting with 1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5 not found: ID does not exist" containerID="1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.520181 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5"} err="failed to get container status \"1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5\": rpc error: code = NotFound desc = could not find container \"1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5\": container with ID starting with 1d96d0a23f8c7757d3e06da930233a61b6ab8bb89975783efca52c014cc447f5 not found: ID does not exist" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.775402 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.909362 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-utilities\") pod \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.909452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6z7\" (UniqueName: \"kubernetes.io/projected/ac32b1d2-20bd-47cd-992f-f668c68a4a86-kube-api-access-ks6z7\") pod \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.909584 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-catalog-content\") pod \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\" (UID: \"ac32b1d2-20bd-47cd-992f-f668c68a4a86\") " Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.911076 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-utilities" (OuterVolumeSpecName: "utilities") pod "ac32b1d2-20bd-47cd-992f-f668c68a4a86" (UID: "ac32b1d2-20bd-47cd-992f-f668c68a4a86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.915718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac32b1d2-20bd-47cd-992f-f668c68a4a86-kube-api-access-ks6z7" (OuterVolumeSpecName: "kube-api-access-ks6z7") pod "ac32b1d2-20bd-47cd-992f-f668c68a4a86" (UID: "ac32b1d2-20bd-47cd-992f-f668c68a4a86"). InnerVolumeSpecName "kube-api-access-ks6z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:34:10 crc kubenswrapper[4768]: I1124 18:34:10.991424 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac32b1d2-20bd-47cd-992f-f668c68a4a86" (UID: "ac32b1d2-20bd-47cd-992f-f668c68a4a86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.012222 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.012300 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks6z7\" (UniqueName: \"kubernetes.io/projected/ac32b1d2-20bd-47cd-992f-f668c68a4a86-kube-api-access-ks6z7\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.012332 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac32b1d2-20bd-47cd-992f-f668c68a4a86-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.338568 4768 generic.go:334] "Generic (PLEG): container finished" podID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerID="7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071" exitCode=0 Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.338620 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerDied","Data":"7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071"} Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.338649 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjsjr" event={"ID":"ac32b1d2-20bd-47cd-992f-f668c68a4a86","Type":"ContainerDied","Data":"e7fe630363494cdfaf989bdb07dec4921853d7aaf949f648909e66134d8a5ed1"} Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.338667 4768 scope.go:117] "RemoveContainer" containerID="7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.338712 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjsjr" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.366928 4768 scope.go:117] "RemoveContainer" containerID="842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.395352 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjsjr"] Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.422261 4768 scope.go:117] "RemoveContainer" containerID="f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.436766 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjsjr"] Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.464004 4768 scope.go:117] "RemoveContainer" containerID="7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071" Nov 24 18:34:11 crc kubenswrapper[4768]: E1124 18:34:11.464573 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071\": container with ID starting with 7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071 not found: ID does not exist" containerID="7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.464649 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071"} err="failed to get container status \"7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071\": rpc error: code = NotFound desc = could not find container \"7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071\": container with ID starting with 7fcd02eb45752d34e8f89bce51f1b58d2d57631728e91cf764ea0a883ea00071 not found: ID does not exist" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.464689 4768 scope.go:117] "RemoveContainer" containerID="842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946" Nov 24 18:34:11 crc kubenswrapper[4768]: E1124 18:34:11.465187 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946\": container with ID starting with 842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946 not found: ID does not exist" containerID="842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.465275 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946"} err="failed to get container status \"842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946\": rpc error: code = NotFound desc = could not find container \"842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946\": container with ID starting with 842759d97ce7842318fc2d89c5b3d71804170aa974676b036451c3e411cd5946 not found: ID does not exist" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.465323 4768 scope.go:117] "RemoveContainer" containerID="f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022" Nov 24 18:34:11 crc kubenswrapper[4768]: E1124 18:34:11.465822 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022\": container with ID starting with f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022 not found: ID does not exist" containerID="f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.465868 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022"} err="failed to get container status \"f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022\": rpc error: code = NotFound desc = could not find container \"f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022\": container with ID starting with f00e6525b56276d16bef161f47598a189ecdabdd8a675832a737f8eead66e022 not found: ID does not exist" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.920590 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" path="/var/lib/kubelet/pods/6f4daa20-719e-4694-a368-9de45d70e84f/volumes" Nov 24 18:34:11 crc kubenswrapper[4768]: I1124 18:34:11.922891 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" path="/var/lib/kubelet/pods/ac32b1d2-20bd-47cd-992f-f668c68a4a86/volumes" Nov 24 18:34:18 crc kubenswrapper[4768]: I1124 18:34:18.422788 4768 generic.go:334] "Generic (PLEG): container finished" podID="fd87ee72-91d9-40a2-a95f-f4358b524d8f" containerID="e10685e436548c493b1c818a8fe3f220a532edf9e1b587023a9cd1d5de4ff4e1" exitCode=0 Nov 24 18:34:18 crc kubenswrapper[4768]: I1124 18:34:18.422865 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" event={"ID":"fd87ee72-91d9-40a2-a95f-f4358b524d8f","Type":"ContainerDied","Data":"e10685e436548c493b1c818a8fe3f220a532edf9e1b587023a9cd1d5de4ff4e1"} Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.899551 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.998915 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovn-combined-ca-bundle\") pod \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.998994 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ceph\") pod \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.999038 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-inventory\") pod \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.999142 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk4zh\" (UniqueName: \"kubernetes.io/projected/fd87ee72-91d9-40a2-a95f-f4358b524d8f-kube-api-access-nk4zh\") pod \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.999182 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ssh-key\") pod \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " Nov 24 18:34:19 crc kubenswrapper[4768]: I1124 18:34:19.999265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovncontroller-config-0\") pod \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\" (UID: \"fd87ee72-91d9-40a2-a95f-f4358b524d8f\") " Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.006143 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd87ee72-91d9-40a2-a95f-f4358b524d8f-kube-api-access-nk4zh" (OuterVolumeSpecName: "kube-api-access-nk4zh") pod "fd87ee72-91d9-40a2-a95f-f4358b524d8f" (UID: "fd87ee72-91d9-40a2-a95f-f4358b524d8f"). InnerVolumeSpecName "kube-api-access-nk4zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.007879 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fd87ee72-91d9-40a2-a95f-f4358b524d8f" (UID: "fd87ee72-91d9-40a2-a95f-f4358b524d8f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.007942 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ceph" (OuterVolumeSpecName: "ceph") pod "fd87ee72-91d9-40a2-a95f-f4358b524d8f" (UID: "fd87ee72-91d9-40a2-a95f-f4358b524d8f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.028088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "fd87ee72-91d9-40a2-a95f-f4358b524d8f" (UID: "fd87ee72-91d9-40a2-a95f-f4358b524d8f"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.037225 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-inventory" (OuterVolumeSpecName: "inventory") pod "fd87ee72-91d9-40a2-a95f-f4358b524d8f" (UID: "fd87ee72-91d9-40a2-a95f-f4358b524d8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.047300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fd87ee72-91d9-40a2-a95f-f4358b524d8f" (UID: "fd87ee72-91d9-40a2-a95f-f4358b524d8f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.102137 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk4zh\" (UniqueName: \"kubernetes.io/projected/fd87ee72-91d9-40a2-a95f-f4358b524d8f-kube-api-access-nk4zh\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.102190 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.102206 4768 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.102221 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.102234 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.102248 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd87ee72-91d9-40a2-a95f-f4358b524d8f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.448655 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zcf2w" event={"ID":"fd87ee72-91d9-40a2-a95f-f4358b524d8f","Type":"ContainerDied","Data":"983db268a8a765161c100c94b3966c0911b226aef58ca646ebfef31f38ff97db"} Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.448712 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983db268a8a765161c100c94b3966c0911b226aef58ca646ebfef31f38ff97db" Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.551816 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"]
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.552408 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="registry-server"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.552480 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="registry-server"
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.552653 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd87ee72-91d9-40a2-a95f-f4358b524d8f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.552705 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd87ee72-91d9-40a2-a95f-f4358b524d8f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.552799 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="extract-utilities"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.552851 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="extract-utilities"
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.552921 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="extract-content"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.552991 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="extract-content"
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.553069 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="extract-utilities"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.553136 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="extract-utilities"
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.553227 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="extract-content"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.553308 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="extract-content"
Nov 24 18:34:20 crc kubenswrapper[4768]: E1124 18:34:20.553395 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="registry-server"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.553460 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="registry-server"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.553719 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd87ee72-91d9-40a2-a95f-f4358b524d8f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.553784 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4daa20-719e-4694-a368-9de45d70e84f" containerName="registry-server"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.553843 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac32b1d2-20bd-47cd-992f-f668c68a4a86" containerName="registry-server"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.554550 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.558359 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.558428 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.558455 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.558431 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.561139 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.562501 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.564414 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.598997 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"]
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715268 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715336 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzjs\" (UniqueName: \"kubernetes.io/projected/edac5bf5-aa67-431e-9e1a-3551d9323772-kube-api-access-xmzjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715406 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.715957 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.818606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.819502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.819567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.819602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.819684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzjs\" (UniqueName: \"kubernetes.io/projected/edac5bf5-aa67-431e-9e1a-3551d9323772-kube-api-access-xmzjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.819798 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.819826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.823204 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.824135 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.824301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.824543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.825059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.825260 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.838757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzjs\" (UniqueName: \"kubernetes.io/projected/edac5bf5-aa67-431e-9e1a-3551d9323772-kube-api-access-xmzjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:20 crc kubenswrapper[4768]: I1124 18:34:20.900654 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"
Nov 24 18:34:21 crc kubenswrapper[4768]: I1124 18:34:21.454546 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz"]
Nov 24 18:34:22 crc kubenswrapper[4768]: I1124 18:34:22.472157 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" event={"ID":"edac5bf5-aa67-431e-9e1a-3551d9323772","Type":"ContainerStarted","Data":"a4b56d7e781dd4a3a8f4d80be9015dd44c2cae3fff59a98deb51edd2459d949c"}
Nov 24 18:34:22 crc kubenswrapper[4768]: I1124 18:34:22.473273 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" event={"ID":"edac5bf5-aa67-431e-9e1a-3551d9323772","Type":"ContainerStarted","Data":"8bb7be140646000c2962dab2e6aef910db190f604241132e2623901831fd7a77"}
Nov 24 18:34:22 crc kubenswrapper[4768]: I1124 18:34:22.498006 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" podStartSLOduration=2.090466075 podStartE2EDuration="2.497981564s" podCreationTimestamp="2025-11-24 18:34:20 +0000 UTC" firstStartedPulling="2025-11-24 18:34:21.468314278 +0000 UTC m=+2700.328896065" lastFinishedPulling="2025-11-24 18:34:21.875829737 +0000 UTC m=+2700.736411554" observedRunningTime="2025-11-24 18:34:22.495372674 +0000 UTC m=+2701.355954461" watchObservedRunningTime="2025-11-24 18:34:22.497981564 +0000 UTC m=+2701.358563361"
Nov 24 18:34:43 crc kubenswrapper[4768]: I1124 18:34:43.657332 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:34:43 crc kubenswrapper[4768]: I1124 18:34:43.658166 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
connection refused" Nov 24 18:35:13 crc kubenswrapper[4768]: I1124 18:35:13.656880 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:35:13 crc kubenswrapper[4768]: I1124 18:35:13.657418 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:35:22 crc kubenswrapper[4768]: I1124 18:35:22.344173 4768 generic.go:334] "Generic (PLEG): container finished" podID="edac5bf5-aa67-431e-9e1a-3551d9323772" containerID="a4b56d7e781dd4a3a8f4d80be9015dd44c2cae3fff59a98deb51edd2459d949c" exitCode=0 Nov 24 18:35:22 crc kubenswrapper[4768]: I1124 18:35:22.344254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" event={"ID":"edac5bf5-aa67-431e-9e1a-3551d9323772","Type":"ContainerDied","Data":"a4b56d7e781dd4a3a8f4d80be9015dd44c2cae3fff59a98deb51edd2459d949c"} Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.793130 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926155 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-metadata-combined-ca-bundle\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926223 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-nova-metadata-neutron-config-0\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926249 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-ovn-metadata-agent-neutron-config-0\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926320 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmzjs\" (UniqueName: \"kubernetes.io/projected/edac5bf5-aa67-431e-9e1a-3551d9323772-kube-api-access-xmzjs\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926373 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ssh-key\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926526 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ceph\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.926657 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-inventory\") pod \"edac5bf5-aa67-431e-9e1a-3551d9323772\" (UID: \"edac5bf5-aa67-431e-9e1a-3551d9323772\") " Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.933610 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ceph" (OuterVolumeSpecName: "ceph") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.934142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edac5bf5-aa67-431e-9e1a-3551d9323772-kube-api-access-xmzjs" (OuterVolumeSpecName: "kube-api-access-xmzjs") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "kube-api-access-xmzjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.934949 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.956234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-inventory" (OuterVolumeSpecName: "inventory") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.957104 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.959531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:35:23 crc kubenswrapper[4768]: I1124 18:35:23.983296 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "edac5bf5-aa67-431e-9e1a-3551d9323772" (UID: "edac5bf5-aa67-431e-9e1a-3551d9323772"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029467 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029557 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029581 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029603 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029622 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029643 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmzjs\" (UniqueName: \"kubernetes.io/projected/edac5bf5-aa67-431e-9e1a-3551d9323772-kube-api-access-xmzjs\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.029661 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edac5bf5-aa67-431e-9e1a-3551d9323772-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.370520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" event={"ID":"edac5bf5-aa67-431e-9e1a-3551d9323772","Type":"ContainerDied","Data":"8bb7be140646000c2962dab2e6aef910db190f604241132e2623901831fd7a77"} Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.370582 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb7be140646000c2962dab2e6aef910db190f604241132e2623901831fd7a77" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.370600 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.487610 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk"] Nov 24 18:35:24 crc kubenswrapper[4768]: E1124 18:35:24.487971 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edac5bf5-aa67-431e-9e1a-3551d9323772" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.487990 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="edac5bf5-aa67-431e-9e1a-3551d9323772" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.488166 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="edac5bf5-aa67-431e-9e1a-3551d9323772" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.488740 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.492044 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.492847 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.493029 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.493077 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.493182 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.493306 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.507257 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk"] Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.540362 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7lk\" (UniqueName: \"kubernetes.io/projected/ad4a499f-9065-421e-9c19-6b6ae06f255e-kube-api-access-ht7lk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.540437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.540528 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.540565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.540599 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.540636 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.643060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7lk\" (UniqueName: \"kubernetes.io/projected/ad4a499f-9065-421e-9c19-6b6ae06f255e-kube-api-access-ht7lk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.643168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.643234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.643287 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.643330 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.643425 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.647884 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.649037 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.649706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.650158 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.651290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.666213 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7lk\" (UniqueName: \"kubernetes.io/projected/ad4a499f-9065-421e-9c19-6b6ae06f255e-kube-api-access-ht7lk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:24 crc kubenswrapper[4768]: I1124 18:35:24.810760 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:35:25 crc kubenswrapper[4768]: I1124 18:35:25.383806 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk"] Nov 24 18:35:25 crc kubenswrapper[4768]: I1124 18:35:25.389070 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 18:35:26 crc kubenswrapper[4768]: I1124 18:35:26.394103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" event={"ID":"ad4a499f-9065-421e-9c19-6b6ae06f255e","Type":"ContainerStarted","Data":"3437ca78c4538d5d391226b88524e9c30b04509151bc6a160fe1f2586ba40dcc"} Nov 24 18:35:26 crc kubenswrapper[4768]: I1124 18:35:26.394467 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" event={"ID":"ad4a499f-9065-421e-9c19-6b6ae06f255e","Type":"ContainerStarted","Data":"fe2e26d50621e52b3dae13efad3dab10561b45ada4fcf43855859bc1afd41788"} Nov 24 18:35:26 crc kubenswrapper[4768]: I1124 18:35:26.415218 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" podStartSLOduration=2.019432725 podStartE2EDuration="2.415191707s" podCreationTimestamp="2025-11-24 18:35:24 +0000 UTC" firstStartedPulling="2025-11-24 18:35:25.38884011 +0000 UTC m=+2764.249421877" lastFinishedPulling="2025-11-24 18:35:25.784599072 +0000 UTC m=+2764.645180859" observedRunningTime="2025-11-24 18:35:26.408527837 +0000 UTC m=+2765.269109624" watchObservedRunningTime="2025-11-24 18:35:26.415191707 +0000 UTC m=+2765.275773524" Nov 24 18:35:43 crc kubenswrapper[4768]: I1124 18:35:43.656229 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:35:43 crc kubenswrapper[4768]: I1124 18:35:43.656954 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:35:43 crc kubenswrapper[4768]: I1124 18:35:43.657025 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:35:43 crc kubenswrapper[4768]: I1124 18:35:43.657906 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1313295cd9893d7f5d85adba62cfd4ff2acd960ce0acf6f5d9782bb54fd8f87"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:35:43 crc kubenswrapper[4768]: I1124 18:35:43.658007 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://d1313295cd9893d7f5d85adba62cfd4ff2acd960ce0acf6f5d9782bb54fd8f87" gracePeriod=600 Nov 24 18:35:44 
Nov 24 18:35:44 crc kubenswrapper[4768]: I1124 18:35:44.577744 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="d1313295cd9893d7f5d85adba62cfd4ff2acd960ce0acf6f5d9782bb54fd8f87" exitCode=0
Nov 24 18:35:44 crc kubenswrapper[4768]: I1124 18:35:44.577834 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"d1313295cd9893d7f5d85adba62cfd4ff2acd960ce0acf6f5d9782bb54fd8f87"}
Nov 24 18:35:44 crc kubenswrapper[4768]: I1124 18:35:44.578390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"}
Nov 24 18:35:44 crc kubenswrapper[4768]: I1124 18:35:44.578431 4768 scope.go:117] "RemoveContainer" containerID="8ea1109d97ea54812a85db08dd6b547f84e3f2aede6c5283c162b2d33d2cad59"
Nov 24 18:37:43 crc kubenswrapper[4768]: I1124 18:37:43.656313 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:37:43 crc kubenswrapper[4768]: I1124 18:37:43.657376 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:38:13 crc kubenswrapper[4768]: I1124 18:38:13.656826 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:38:13 crc kubenswrapper[4768]: I1124 18:38:13.657634 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:38:43 crc kubenswrapper[4768]: I1124 18:38:43.656175 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 18:38:43 crc kubenswrapper[4768]: I1124 18:38:43.656796 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 18:38:43 crc kubenswrapper[4768]: I1124 18:38:43.656862 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj"
Nov 24 18:38:43 crc kubenswrapper[4768]: I1124 18:38:43.657827 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 18:38:43 crc kubenswrapper[4768]: I1124 18:38:43.657923 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" gracePeriod=600
Nov 24 18:38:43 crc kubenswrapper[4768]: E1124 18:38:43.810869 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:38:44 crc kubenswrapper[4768]: I1124 18:38:44.567078 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" exitCode=0
Nov 24 18:38:44 crc kubenswrapper[4768]: I1124 18:38:44.567194 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"}
Nov 24 18:38:44 crc kubenswrapper[4768]: I1124 18:38:44.567308 4768 scope.go:117] "RemoveContainer" containerID="d1313295cd9893d7f5d85adba62cfd4ff2acd960ce0acf6f5d9782bb54fd8f87"
Nov 24 18:38:44 crc kubenswrapper[4768]: I1124 18:38:44.568184 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:38:44 crc kubenswrapper[4768]: E1124 18:38:44.568658 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:38:56 crc kubenswrapper[4768]: I1124 18:38:56.899365 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:38:56 crc kubenswrapper[4768]: E1124 18:38:56.900756 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:39:08 crc kubenswrapper[4768]: I1124 18:39:08.898847 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:39:08 crc kubenswrapper[4768]: E1124 18:39:08.899902 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:39:22 crc kubenswrapper[4768]: I1124 18:39:22.898797 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:39:22 crc kubenswrapper[4768]: E1124 18:39:22.900081 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:39:35 crc kubenswrapper[4768]: I1124 18:39:35.898469 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:39:35 crc kubenswrapper[4768]: E1124 18:39:35.899413 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:39:48 crc kubenswrapper[4768]: I1124 18:39:48.899452 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:39:48 crc kubenswrapper[4768]: E1124 18:39:48.901471 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:40:01 crc kubenswrapper[4768]: I1124 18:40:01.905235 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a"
Nov 24 18:40:01 crc kubenswrapper[4768]: E1124 18:40:01.906102 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:40:03 crc kubenswrapper[4768]: I1124 18:40:03.380714 4768 generic.go:334] "Generic (PLEG): container finished" podID="ad4a499f-9065-421e-9c19-6b6ae06f255e" containerID="3437ca78c4538d5d391226b88524e9c30b04509151bc6a160fe1f2586ba40dcc" exitCode=0
kubenswrapper[4768]: I1124 18:40:03.380753 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" event={"ID":"ad4a499f-9065-421e-9c19-6b6ae06f255e","Type":"ContainerDied","Data":"3437ca78c4538d5d391226b88524e9c30b04509151bc6a160fe1f2586ba40dcc"} Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.779573 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.885012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ssh-key\") pod \"ad4a499f-9065-421e-9c19-6b6ae06f255e\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.885093 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ceph\") pod \"ad4a499f-9065-421e-9c19-6b6ae06f255e\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.885198 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-combined-ca-bundle\") pod \"ad4a499f-9065-421e-9c19-6b6ae06f255e\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.885262 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7lk\" (UniqueName: \"kubernetes.io/projected/ad4a499f-9065-421e-9c19-6b6ae06f255e-kube-api-access-ht7lk\") pod \"ad4a499f-9065-421e-9c19-6b6ae06f255e\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.885308 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-secret-0\") pod \"ad4a499f-9065-421e-9c19-6b6ae06f255e\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.885364 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-inventory\") pod \"ad4a499f-9065-421e-9c19-6b6ae06f255e\" (UID: \"ad4a499f-9065-421e-9c19-6b6ae06f255e\") " Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.891991 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ceph" (OuterVolumeSpecName: "ceph") pod "ad4a499f-9065-421e-9c19-6b6ae06f255e" (UID: "ad4a499f-9065-421e-9c19-6b6ae06f255e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.892377 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ad4a499f-9065-421e-9c19-6b6ae06f255e" (UID: "ad4a499f-9065-421e-9c19-6b6ae06f255e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.892405 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4a499f-9065-421e-9c19-6b6ae06f255e-kube-api-access-ht7lk" (OuterVolumeSpecName: "kube-api-access-ht7lk") pod "ad4a499f-9065-421e-9c19-6b6ae06f255e" (UID: "ad4a499f-9065-421e-9c19-6b6ae06f255e"). InnerVolumeSpecName "kube-api-access-ht7lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.912795 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ad4a499f-9065-421e-9c19-6b6ae06f255e" (UID: "ad4a499f-9065-421e-9c19-6b6ae06f255e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.939687 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-inventory" (OuterVolumeSpecName: "inventory") pod "ad4a499f-9065-421e-9c19-6b6ae06f255e" (UID: "ad4a499f-9065-421e-9c19-6b6ae06f255e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.940125 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad4a499f-9065-421e-9c19-6b6ae06f255e" (UID: "ad4a499f-9065-421e-9c19-6b6ae06f255e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.987331 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.987363 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7lk\" (UniqueName: \"kubernetes.io/projected/ad4a499f-9065-421e-9c19-6b6ae06f255e-kube-api-access-ht7lk\") on node \"crc\" DevicePath \"\"" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.987375 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.987386 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.987395 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:40:04 crc kubenswrapper[4768]: I1124 18:40:04.987406 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ad4a499f-9065-421e-9c19-6b6ae06f255e-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.402214 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" event={"ID":"ad4a499f-9065-421e-9c19-6b6ae06f255e","Type":"ContainerDied","Data":"fe2e26d50621e52b3dae13efad3dab10561b45ada4fcf43855859bc1afd41788"} Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.402270 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2e26d50621e52b3dae13efad3dab10561b45ada4fcf43855859bc1afd41788" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.402308 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.518915 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz"] Nov 24 18:40:05 crc kubenswrapper[4768]: E1124 18:40:05.519584 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4a499f-9065-421e-9c19-6b6ae06f255e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.519603 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4a499f-9065-421e-9c19-6b6ae06f255e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.519786 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4a499f-9065-421e-9c19-6b6ae06f255e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.520459 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.524689 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.525586 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.526778 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pxggh" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.526868 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.526798 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.527568 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.527697 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.528010 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.528290 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.531702 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz"] Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.700395 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.700456 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.700605 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j78d5\" (UniqueName: \"kubernetes.io/projected/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-kube-api-access-j78d5\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.700897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.700933 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.700960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.701006 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.701029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.701070 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.701107 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.701144 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.803875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.804575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.804646 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.804703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.804775 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.804854 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.804891 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.805040 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j78d5\" (UniqueName: \"kubernetes.io/projected/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-kube-api-access-j78d5\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.805070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.805089 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.805119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.805732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.805983 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.809750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.809825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.809866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.810343 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.813085 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.813117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.813942 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: 
\"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.814875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.839265 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j78d5\" (UniqueName: \"kubernetes.io/projected/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-kube-api-access-j78d5\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:05 crc kubenswrapper[4768]: I1124 18:40:05.843126 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:40:06 crc kubenswrapper[4768]: I1124 18:40:06.399116 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz"] Nov 24 18:40:06 crc kubenswrapper[4768]: W1124 18:40:06.411979 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd99c2dc_4b0c_49e8_bc2e_59a8ad923066.slice/crio-5aee8d2f6c56e2f547f9bfc41fa1e12661731955cf59089056e1a211f8952bd1 WatchSource:0}: Error finding container 5aee8d2f6c56e2f547f9bfc41fa1e12661731955cf59089056e1a211f8952bd1: Status 404 returned error can't find the container with id 5aee8d2f6c56e2f547f9bfc41fa1e12661731955cf59089056e1a211f8952bd1 Nov 24 18:40:07 crc kubenswrapper[4768]: I1124 18:40:07.427179 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" event={"ID":"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066","Type":"ContainerStarted","Data":"8531dbd84ef67bd5df039c023208a044ab2073125d007a33b5b827f149333e82"} Nov 24 18:40:07 crc kubenswrapper[4768]: I1124 18:40:07.427575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" event={"ID":"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066","Type":"ContainerStarted","Data":"5aee8d2f6c56e2f547f9bfc41fa1e12661731955cf59089056e1a211f8952bd1"} Nov 24 18:40:07 crc kubenswrapper[4768]: I1124 18:40:07.460137 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" podStartSLOduration=1.921493654 podStartE2EDuration="2.460120393s" podCreationTimestamp="2025-11-24 18:40:05 +0000 UTC" firstStartedPulling="2025-11-24 18:40:06.416748508 +0000 UTC m=+3045.277330285" lastFinishedPulling="2025-11-24 18:40:06.955375237 +0000 UTC m=+3045.815957024" observedRunningTime="2025-11-24 18:40:07.459416954 +0000 UTC m=+3046.319998771" watchObservedRunningTime="2025-11-24 18:40:07.460120393 +0000 UTC m=+3046.320702170" Nov 24 18:40:13 crc kubenswrapper[4768]: I1124 18:40:13.898552 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:40:13 crc kubenswrapper[4768]: E1124 
18:40:13.899938 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:40:24 crc kubenswrapper[4768]: I1124 18:40:24.898523 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:40:24 crc kubenswrapper[4768]: E1124 18:40:24.899293 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:40:37 crc kubenswrapper[4768]: I1124 18:40:37.898342 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:40:37 crc kubenswrapper[4768]: E1124 18:40:37.899352 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:40:50 crc kubenswrapper[4768]: I1124 18:40:50.898724 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:40:50 crc kubenswrapper[4768]: E1124 18:40:50.899780 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:41:05 crc kubenswrapper[4768]: I1124 18:41:05.899929 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:41:05 crc kubenswrapper[4768]: E1124 18:41:05.900985 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:41:16 crc kubenswrapper[4768]: I1124 18:41:16.899084 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:41:16 crc kubenswrapper[4768]: E1124 18:41:16.899996 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:41:29 crc kubenswrapper[4768]: I1124 18:41:29.899537 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:41:29 crc kubenswrapper[4768]: E1124 18:41:29.901212 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:41:40 crc kubenswrapper[4768]: I1124 18:41:40.898418 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:41:40 crc kubenswrapper[4768]: E1124 18:41:40.899364 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:41:54 crc kubenswrapper[4768]: I1124 18:41:54.870225 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phkw2"] Nov 24 18:41:54 crc kubenswrapper[4768]: I1124 18:41:54.876378 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:54 crc kubenswrapper[4768]: I1124 18:41:54.885097 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phkw2"] Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.016857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-utilities\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.017830 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d7jg\" (UniqueName: \"kubernetes.io/projected/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-kube-api-access-9d7jg\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.018007 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-catalog-content\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.119711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d7jg\" (UniqueName: \"kubernetes.io/projected/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-kube-api-access-9d7jg\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.119813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-catalog-content\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.119851 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-utilities\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.120474 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-catalog-content\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.120539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-utilities\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.141870 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9d7jg\" (UniqueName: \"kubernetes.io/projected/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-kube-api-access-9d7jg\") pod \"redhat-marketplace-phkw2\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.198629 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.675997 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phkw2"] Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.702667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phkw2" event={"ID":"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd","Type":"ContainerStarted","Data":"a484b7c07a4c62621b0e974d5b770d5f33b4628e48afd06beacf1e0b935671c4"} Nov 24 18:41:55 crc kubenswrapper[4768]: I1124 18:41:55.899643 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:41:55 crc kubenswrapper[4768]: E1124 18:41:55.901801 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:41:56 crc kubenswrapper[4768]: I1124 18:41:56.725006 4768 generic.go:334] "Generic (PLEG): container finished" podID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerID="d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb" exitCode=0 Nov 24 18:41:56 crc kubenswrapper[4768]: I1124 18:41:56.725070 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phkw2" event={"ID":"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd","Type":"ContainerDied","Data":"d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb"} Nov 24 18:41:56 crc kubenswrapper[4768]: I1124 18:41:56.729694 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 18:41:58 crc kubenswrapper[4768]: I1124 18:41:58.746463 4768 generic.go:334] "Generic (PLEG): container finished" podID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerID="430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498" exitCode=0 Nov 24 18:41:58 crc kubenswrapper[4768]: I1124 18:41:58.746581 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phkw2" event={"ID":"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd","Type":"ContainerDied","Data":"430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498"} Nov 24 18:41:59 crc kubenswrapper[4768]: I1124 18:41:59.762763 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phkw2" event={"ID":"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd","Type":"ContainerStarted","Data":"e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e"} Nov 24 18:41:59 crc kubenswrapper[4768]: I1124 18:41:59.792631 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phkw2" podStartSLOduration=3.308281993 podStartE2EDuration="5.79260379s" 
podCreationTimestamp="2025-11-24 18:41:54 +0000 UTC" firstStartedPulling="2025-11-24 18:41:56.72909451 +0000 UTC m=+3155.589676297" lastFinishedPulling="2025-11-24 18:41:59.213416317 +0000 UTC m=+3158.073998094" observedRunningTime="2025-11-24 18:41:59.783256097 +0000 UTC m=+3158.643837894" watchObservedRunningTime="2025-11-24 18:41:59.79260379 +0000 UTC m=+3158.653185577" Nov 24 18:42:05 crc kubenswrapper[4768]: I1124 18:42:05.198737 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:42:05 crc kubenswrapper[4768]: I1124 18:42:05.199357 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:42:05 crc kubenswrapper[4768]: I1124 18:42:05.253220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:42:05 crc kubenswrapper[4768]: I1124 18:42:05.913449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:42:05 crc kubenswrapper[4768]: I1124 18:42:05.974243 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phkw2"] Nov 24 18:42:07 crc kubenswrapper[4768]: I1124 18:42:07.844407 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phkw2" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="registry-server" containerID="cri-o://e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e" gracePeriod=2 Nov 24 18:42:07 crc kubenswrapper[4768]: I1124 18:42:07.900245 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:42:07 crc kubenswrapper[4768]: E1124 18:42:07.901070 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.331191 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.434963 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-utilities\") pod \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.435542 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-catalog-content\") pod \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.435699 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d7jg\" (UniqueName: \"kubernetes.io/projected/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-kube-api-access-9d7jg\") pod \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\" (UID: \"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd\") " Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.436066 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-utilities" (OuterVolumeSpecName: "utilities") pod "c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" (UID: "c4cce511-9003-4ffa-8eda-b6f1e31a8fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.436425 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.444125 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-kube-api-access-9d7jg" (OuterVolumeSpecName: "kube-api-access-9d7jg") pod "c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" (UID: "c4cce511-9003-4ffa-8eda-b6f1e31a8fdd"). InnerVolumeSpecName "kube-api-access-9d7jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.463511 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" (UID: "c4cce511-9003-4ffa-8eda-b6f1e31a8fdd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.537801 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.537842 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d7jg\" (UniqueName: \"kubernetes.io/projected/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd-kube-api-access-9d7jg\") on node \"crc\" DevicePath \"\"" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.853670 4768 generic.go:334] "Generic (PLEG): container finished" podID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerID="e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e" exitCode=0 Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.853724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phkw2" event={"ID":"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd","Type":"ContainerDied","Data":"e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e"} Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.853802 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phkw2" event={"ID":"c4cce511-9003-4ffa-8eda-b6f1e31a8fdd","Type":"ContainerDied","Data":"a484b7c07a4c62621b0e974d5b770d5f33b4628e48afd06beacf1e0b935671c4"} Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.853832 4768 scope.go:117] "RemoveContainer" containerID="e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.854280 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phkw2" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.879828 4768 scope.go:117] "RemoveContainer" containerID="430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.896899 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phkw2"] Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.906005 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phkw2"] Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.917022 4768 scope.go:117] "RemoveContainer" containerID="d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.954000 4768 scope.go:117] "RemoveContainer" containerID="e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e" Nov 24 18:42:08 crc kubenswrapper[4768]: E1124 18:42:08.954890 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e\": container with ID starting with e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e not found: ID does not exist" containerID="e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.954941 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e"} err="failed to get container status \"e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e\": rpc error: code = NotFound desc = could not find container \"e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e\": container with ID starting with e9449bc84687d70e73cc813b648508c2d0fbde1d394f4b3ad7c5af5a0b10589e not found: ID does not exist" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.954977 4768 scope.go:117] "RemoveContainer" containerID="430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498" Nov 24 18:42:08 crc kubenswrapper[4768]: E1124 18:42:08.955467 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498\": container with ID starting with 430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498 not found: ID does not exist" containerID="430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.955551 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498"} err="failed to get container status \"430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498\": rpc error: code = NotFound desc = could not find container \"430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498\": container with ID starting with 430242d0346dbcfc31f420b865fca5b10972e78f769f70f93d753bfafa757498 not found: ID does not exist" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.955577 4768 scope.go:117] "RemoveContainer" containerID="d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb" Nov 24 18:42:08 crc kubenswrapper[4768]: E1124 18:42:08.955963 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb\": container with ID starting with d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb not found: ID does not exist" containerID="d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb" Nov 24 18:42:08 crc kubenswrapper[4768]: I1124 18:42:08.955993 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb"} err="failed to get container status \"d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb\": rpc error: code = NotFound desc = could not find container \"d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb\": container with ID starting with d4bb9599dcb28126959b0613e504a21d91be4fbe899d62897fa271df83d09bcb not found: ID does not exist" Nov 24 18:42:09 crc kubenswrapper[4768]: I1124 18:42:09.916705 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" path="/var/lib/kubelet/pods/c4cce511-9003-4ffa-8eda-b6f1e31a8fdd/volumes" Nov 24 18:42:18 crc kubenswrapper[4768]: I1124 18:42:18.898730 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:42:18 crc kubenswrapper[4768]: E1124 18:42:18.899379 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:42:31 crc kubenswrapper[4768]: I1124 18:42:31.913789 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:42:31 crc kubenswrapper[4768]: E1124 18:42:31.915064 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:42:42 crc kubenswrapper[4768]: I1124 18:42:42.898961 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:42:42 crc kubenswrapper[4768]: E1124 18:42:42.900228 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:42:56 crc kubenswrapper[4768]: I1124 18:42:56.899658 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:42:56 crc kubenswrapper[4768]: E1124 18:42:56.901110 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.136916 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pn2ls"] Nov 24 18:43:03 crc kubenswrapper[4768]: E1124 18:43:03.138552 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="extract-utilities" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.138573 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="extract-utilities" Nov 24 18:43:03 crc kubenswrapper[4768]: E1124 18:43:03.138606 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="extract-content" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.138613 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="extract-content" Nov 24 18:43:03 crc kubenswrapper[4768]: E1124 18:43:03.138647 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="registry-server" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.138655 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="registry-server" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.138859 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cce511-9003-4ffa-8eda-b6f1e31a8fdd" containerName="registry-server" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.141221 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.161172 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn2ls"] Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.199403 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-catalog-content\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.199541 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-utilities\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.199617 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh768\" (UniqueName: \"kubernetes.io/projected/a11ff884-01ca-453c-96c5-2fdff76cde0c-kube-api-access-sh768\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.302559 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-catalog-content\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.303242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-utilities\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.303253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-catalog-content\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.303346 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh768\" (UniqueName: \"kubernetes.io/projected/a11ff884-01ca-453c-96c5-2fdff76cde0c-kube-api-access-sh768\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.303594 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-utilities\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.335092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sh768\" (UniqueName: \"kubernetes.io/projected/a11ff884-01ca-453c-96c5-2fdff76cde0c-kube-api-access-sh768\") pod \"redhat-operators-pn2ls\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.487148 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:03 crc kubenswrapper[4768]: I1124 18:43:03.965184 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn2ls"] Nov 24 18:43:04 crc kubenswrapper[4768]: I1124 18:43:04.437662 4768 generic.go:334] "Generic (PLEG): container finished" podID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerID="19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603" exitCode=0 Nov 24 18:43:04 crc kubenswrapper[4768]: I1124 18:43:04.438099 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn2ls" event={"ID":"a11ff884-01ca-453c-96c5-2fdff76cde0c","Type":"ContainerDied","Data":"19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603"} Nov 24 18:43:04 crc kubenswrapper[4768]: I1124 18:43:04.438143 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn2ls" event={"ID":"a11ff884-01ca-453c-96c5-2fdff76cde0c","Type":"ContainerStarted","Data":"c6158f5ecb33c0faa27cc5ad678815f293629be4e493b9365c6cbfb8824aaed8"} Nov 24 18:43:06 crc kubenswrapper[4768]: I1124 18:43:06.470144 4768 generic.go:334] "Generic (PLEG): container finished" podID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerID="9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e" exitCode=0 Nov 24 18:43:06 crc kubenswrapper[4768]: I1124 18:43:06.470233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn2ls" event={"ID":"a11ff884-01ca-453c-96c5-2fdff76cde0c","Type":"ContainerDied","Data":"9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e"} Nov 24 18:43:07 crc kubenswrapper[4768]: I1124 18:43:07.486525 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn2ls" event={"ID":"a11ff884-01ca-453c-96c5-2fdff76cde0c","Type":"ContainerStarted","Data":"7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b"} Nov 24 18:43:07 crc kubenswrapper[4768]: I1124 18:43:07.489344 4768 generic.go:334] "Generic (PLEG): container finished" podID="fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" containerID="8531dbd84ef67bd5df039c023208a044ab2073125d007a33b5b827f149333e82" exitCode=0 Nov 24 18:43:07 crc kubenswrapper[4768]: I1124 18:43:07.489390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" event={"ID":"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066","Type":"ContainerDied","Data":"8531dbd84ef67bd5df039c023208a044ab2073125d007a33b5b827f149333e82"} Nov 24 18:43:07 crc kubenswrapper[4768]: I1124 18:43:07.524514 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pn2ls" podStartSLOduration=1.987811776 podStartE2EDuration="4.524477563s" podCreationTimestamp="2025-11-24 18:43:03 +0000 UTC" firstStartedPulling="2025-11-24 18:43:04.442801454 +0000 UTC m=+3223.303383251" lastFinishedPulling="2025-11-24 18:43:06.979467251 +0000 UTC m=+3225.840049038" observedRunningTime="2025-11-24 18:43:07.519417526 +0000 UTC m=+3226.379999363" 
watchObservedRunningTime="2025-11-24 18:43:07.524477563 +0000 UTC m=+3226.385059340" Nov 24 18:43:08 crc kubenswrapper[4768]: I1124 18:43:08.899826 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:43:08 crc kubenswrapper[4768]: E1124 18:43:08.900683 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:43:08 crc kubenswrapper[4768]: I1124 18:43:08.972600 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135319 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j78d5\" (UniqueName: \"kubernetes.io/projected/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-kube-api-access-j78d5\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135421 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135567 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ssh-key\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135608 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-0\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135647 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-0\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135686 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-custom-ceph-combined-ca-bundle\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135746 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph-nova-0\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 
18:43:09.135855 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-1\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.135982 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-inventory\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.136449 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-extra-config-0\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.136539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-1\") pod \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\" (UID: \"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066\") " Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.157689 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-kube-api-access-j78d5" (OuterVolumeSpecName: "kube-api-access-j78d5") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "kube-api-access-j78d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.166043 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.169479 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph" (OuterVolumeSpecName: "ceph") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.180160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-inventory" (OuterVolumeSpecName: "inventory") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.186372 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.196946 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.197075 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.197898 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.200999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.203765 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.204217 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" (UID: "fd99c2dc-4b0c-49e8-bc2e-59a8ad923066"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244453 4768 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244532 4768 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244554 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j78d5\" (UniqueName: \"kubernetes.io/projected/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-kube-api-access-j78d5\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244574 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244591 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244606 4768 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244619 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244633 4768 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244646 4768 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244658 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.244670 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd99c2dc-4b0c-49e8-bc2e-59a8ad923066-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.514263 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" event={"ID":"fd99c2dc-4b0c-49e8-bc2e-59a8ad923066","Type":"ContainerDied","Data":"5aee8d2f6c56e2f547f9bfc41fa1e12661731955cf59089056e1a211f8952bd1"} Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.514797 4768 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="5aee8d2f6c56e2f547f9bfc41fa1e12661731955cf59089056e1a211f8952bd1" Nov 24 18:43:09 crc kubenswrapper[4768]: I1124 18:43:09.514399 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz" Nov 24 18:43:13 crc kubenswrapper[4768]: I1124 18:43:13.488205 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:13 crc kubenswrapper[4768]: I1124 18:43:13.488832 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:13 crc kubenswrapper[4768]: I1124 18:43:13.560313 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:13 crc kubenswrapper[4768]: I1124 18:43:13.619196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:13 crc kubenswrapper[4768]: I1124 18:43:13.803149 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn2ls"] Nov 24 18:43:15 crc kubenswrapper[4768]: I1124 18:43:15.577124 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pn2ls" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="registry-server" containerID="cri-o://7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b" gracePeriod=2 Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.466438 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.598036 4768 generic.go:334] "Generic (PLEG): container finished" podID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerID="7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b" exitCode=0 Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.598083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn2ls" event={"ID":"a11ff884-01ca-453c-96c5-2fdff76cde0c","Type":"ContainerDied","Data":"7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b"} Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.598115 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn2ls" event={"ID":"a11ff884-01ca-453c-96c5-2fdff76cde0c","Type":"ContainerDied","Data":"c6158f5ecb33c0faa27cc5ad678815f293629be4e493b9365c6cbfb8824aaed8"} Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.598125 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pn2ls" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.598137 4768 scope.go:117] "RemoveContainer" containerID="7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.625461 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh768\" (UniqueName: \"kubernetes.io/projected/a11ff884-01ca-453c-96c5-2fdff76cde0c-kube-api-access-sh768\") pod \"a11ff884-01ca-453c-96c5-2fdff76cde0c\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.625528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-catalog-content\") pod \"a11ff884-01ca-453c-96c5-2fdff76cde0c\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.625554 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-utilities\") pod \"a11ff884-01ca-453c-96c5-2fdff76cde0c\" (UID: \"a11ff884-01ca-453c-96c5-2fdff76cde0c\") " Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.626880 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-utilities" (OuterVolumeSpecName: "utilities") pod "a11ff884-01ca-453c-96c5-2fdff76cde0c" (UID: "a11ff884-01ca-453c-96c5-2fdff76cde0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.628373 4768 scope.go:117] "RemoveContainer" containerID="9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.633783 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11ff884-01ca-453c-96c5-2fdff76cde0c-kube-api-access-sh768" (OuterVolumeSpecName: "kube-api-access-sh768") pod "a11ff884-01ca-453c-96c5-2fdff76cde0c" (UID: "a11ff884-01ca-453c-96c5-2fdff76cde0c"). InnerVolumeSpecName "kube-api-access-sh768". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.697102 4768 scope.go:117] "RemoveContainer" containerID="19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.725470 4768 scope.go:117] "RemoveContainer" containerID="7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b" Nov 24 18:43:17 crc kubenswrapper[4768]: E1124 18:43:17.726465 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b\": container with ID starting with 7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b not found: ID does not exist" containerID="7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.726570 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b"} err="failed to get container status \"7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b\": rpc error: code = NotFound desc = could not find container \"7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b\": container with ID starting with 7d43e4d72158ec91bfa4dcfe3a1c6b4295ee1b8bc7f17e554bdb4155eebbdb0b not found: ID does not exist" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.726609 4768 scope.go:117] "RemoveContainer" containerID="9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e" Nov 24 18:43:17 crc kubenswrapper[4768]: E1124 18:43:17.727088 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e\": container with ID starting with 9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e not found: ID does not exist" containerID="9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.727133 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e"} err="failed to get container status \"9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e\": rpc error: code = NotFound desc = could not find container \"9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e\": container with ID starting with 9e0338cef2945aa4ecfcd5ff48825a77fffbe64fbde83d5b075cf4ca2214912e not found: ID does not exist" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.727161 4768 scope.go:117] "RemoveContainer" containerID="19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.727739 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh768\" (UniqueName: \"kubernetes.io/projected/a11ff884-01ca-453c-96c5-2fdff76cde0c-kube-api-access-sh768\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:17 crc kubenswrapper[4768]: E1124 18:43:17.727739 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603\": container with ID starting with 19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603 not found: ID does not 
exist" containerID="19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.727768 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.727788 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603"} err="failed to get container status \"19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603\": rpc error: code = NotFound desc = could not find container \"19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603\": container with ID starting with 19b7f7f94da0454102155d81a4f0d3f9e083ad40ec106ef417b4f80ab5000603 not found: ID does not exist" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.744846 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a11ff884-01ca-453c-96c5-2fdff76cde0c" (UID: "a11ff884-01ca-453c-96c5-2fdff76cde0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.829938 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ff884-01ca-453c-96c5-2fdff76cde0c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.961754 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn2ls"] Nov 24 18:43:17 crc kubenswrapper[4768]: I1124 18:43:17.976078 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pn2ls"] Nov 24 18:43:19 crc kubenswrapper[4768]: I1124 18:43:19.900049 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:43:19 crc kubenswrapper[4768]: E1124 18:43:19.900934 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:43:19 crc kubenswrapper[4768]: I1124 18:43:19.912728 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" path="/var/lib/kubelet/pods/a11ff884-01ca-453c-96c5-2fdff76cde0c/volumes" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.534860 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 24 18:43:23 crc kubenswrapper[4768]: E1124 18:43:23.535653 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="extract-content" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.535667 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="extract-content" Nov 24 18:43:23 crc kubenswrapper[4768]: E1124 18:43:23.535687 4768 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.535694 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 24 18:43:23 crc kubenswrapper[4768]: E1124 18:43:23.535705 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="extract-utilities" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.535712 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="extract-utilities" Nov 24 18:43:23 crc kubenswrapper[4768]: E1124 18:43:23.535721 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="registry-server" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.535726 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="registry-server" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.535907 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd99c2dc-4b0c-49e8-bc2e-59a8ad923066" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.535925 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11ff884-01ca-453c-96c5-2fdff76cde0c" containerName="registry-server" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.536866 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.541209 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.541777 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.547673 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.549525 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.551434 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.565105 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.581325 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.643963 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644003 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644086 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644105 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnv5\" (UniqueName: \"kubernetes.io/projected/97567296-4a8c-4270-96b4-83eaabf8194b-kube-api-access-2lnv5\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644145 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-scripts\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644197 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-lib-cinder\") pod 
\"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-sys\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/97567296-4a8c-4270-96b4-83eaabf8194b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644353 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644388 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp28q\" (UniqueName: \"kubernetes.io/projected/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-kube-api-access-hp28q\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644449 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644559 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644604 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-ceph\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " 
pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644661 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-lib-modules\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644698 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-run\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644736 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644756 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644833 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-config-data\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644872 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644927 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-run\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.644973 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.645060 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.645087 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.645108 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.645141 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.645169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-dev\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-config-data\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746844 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746870 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-run\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-machine-id\") pod 
\"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746953 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.746969 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747033 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-dev\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-run\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747762 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-dev\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747821 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747852 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747893 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747912 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnv5\" (UniqueName: \"kubernetes.io/projected/97567296-4a8c-4270-96b4-83eaabf8194b-kube-api-access-2lnv5\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.747976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-scripts\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748006 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748069 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748131 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748163 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-sys\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748196 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/97567296-4a8c-4270-96b4-83eaabf8194b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748245 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748303 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748341 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp28q\" (UniqueName: \"kubernetes.io/projected/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-kube-api-access-hp28q\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748392 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 
18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748523 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-ceph\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-lib-modules\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748683 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748699 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-run\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748742 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.748978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749084 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749141 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749221 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749286 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-sys\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-run\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749360 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/97567296-4a8c-4270-96b4-83eaabf8194b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.749371 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-lib-modules\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.753530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-config-data-custom\") pod 
\"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.753831 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-config-data\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.755552 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-ceph\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.755994 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.756016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.756985 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/97567296-4a8c-4270-96b4-83eaabf8194b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.758052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.764183 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97567296-4a8c-4270-96b4-83eaabf8194b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.764191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-scripts\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.764188 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.765205 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp28q\" (UniqueName: 
\"kubernetes.io/projected/9d187717-3b2d-42c1-9daa-6db0b5d2c14c-kube-api-access-hp28q\") pod \"cinder-backup-0\" (UID: \"9d187717-3b2d-42c1-9daa-6db0b5d2c14c\") " pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.768782 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnv5\" (UniqueName: \"kubernetes.io/projected/97567296-4a8c-4270-96b4-83eaabf8194b-kube-api-access-2lnv5\") pod \"cinder-volume-volume1-0\" (UID: \"97567296-4a8c-4270-96b4-83eaabf8194b\") " pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.853010 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 24 18:43:23 crc kubenswrapper[4768]: I1124 18:43:23.874429 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.043550 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-677bdf55b9-f4t6m"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.045341 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.049186 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.049295 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.049938 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-nbc5m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.049978 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.062831 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-677bdf55b9-f4t6m"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.139550 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-7pw6k"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.141047 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.169649 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d357-account-create-jbk6f"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.170868 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.174625 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d596t\" (UniqueName: \"kubernetes.io/projected/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-kube-api-access-d596t\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.174678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-config-data\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.174748 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-horizon-secret-key\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.174781 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-scripts\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.174823 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-logs\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.175022 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.175178 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-7pw6k"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.207937 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d357-account-create-jbk6f"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.234711 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cd66787c-cg7lk"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.237326 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.251465 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cd66787c-cg7lk"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.282652 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.288375 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.292184 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t2kxl" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.292218 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.292358 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.292416 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.306021 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d596t\" (UniqueName: \"kubernetes.io/projected/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-kube-api-access-d596t\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.315592 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-config-data\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.314004 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-config-data\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.317084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ebe595-8584-4ad6-a043-b2df4d7cef79-operator-scripts\") pod \"manila-db-create-7pw6k\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.317179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-horizon-secret-key\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.317363 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-scripts\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.326543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-scripts\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.327378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9eb5f31-ed6d-43b8-920a-9d6767e66382-operator-scripts\") pod \"manila-d357-account-create-jbk6f\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.328477 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-logs\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.328588 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbddx\" (UniqueName: \"kubernetes.io/projected/60ebe595-8584-4ad6-a043-b2df4d7cef79-kube-api-access-kbddx\") pod \"manila-db-create-7pw6k\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.328703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjz4\" (UniqueName: \"kubernetes.io/projected/f9eb5f31-ed6d-43b8-920a-9d6767e66382-kube-api-access-pmjz4\") pod \"manila-d357-account-create-jbk6f\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.328972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-logs\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.331234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-horizon-secret-key\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.333768 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.344998 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d596t\" (UniqueName: \"kubernetes.io/projected/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-kube-api-access-d596t\") pod \"horizon-677bdf55b9-f4t6m\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.360226 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.363347 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.367261 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.367751 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.375933 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.379570 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.431609 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2lzs\" (UniqueName: \"kubernetes.io/projected/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-kube-api-access-d2lzs\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.431658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-horizon-secret-key\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9eb5f31-ed6d-43b8-920a-9d6767e66382-operator-scripts\") pod \"manila-d357-account-create-jbk6f\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432268 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbddx\" (UniqueName: \"kubernetes.io/projected/60ebe595-8584-4ad6-a043-b2df4d7cef79-kube-api-access-kbddx\") pod \"manila-db-create-7pw6k\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.432342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkv2f\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-kube-api-access-kkv2f\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433220 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433243 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-logs\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433267 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjz4\" (UniqueName: \"kubernetes.io/projected/f9eb5f31-ed6d-43b8-920a-9d6767e66382-kube-api-access-pmjz4\") pod \"manila-d357-account-create-jbk6f\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-config-data\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433359 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-logs\") pod 
\"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433448 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-scripts\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433501 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ebe595-8584-4ad6-a043-b2df4d7cef79-operator-scripts\") pod \"manila-db-create-7pw6k\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.433152 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9eb5f31-ed6d-43b8-920a-9d6767e66382-operator-scripts\") pod \"manila-d357-account-create-jbk6f\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.434793 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ebe595-8584-4ad6-a043-b2df4d7cef79-operator-scripts\") pod \"manila-db-create-7pw6k\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.449437 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbddx\" (UniqueName: \"kubernetes.io/projected/60ebe595-8584-4ad6-a043-b2df4d7cef79-kube-api-access-kbddx\") pod \"manila-db-create-7pw6k\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.449827 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjz4\" (UniqueName: \"kubernetes.io/projected/f9eb5f31-ed6d-43b8-920a-9d6767e66382-kube-api-access-pmjz4\") pod \"manila-d357-account-create-jbk6f\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.484100 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.510227 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-config-data\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535431 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-config-data\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535465 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535509 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535732 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-logs\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5b2b\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-kube-api-access-v5b2b\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535824 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-scripts\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " 
pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535858 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2lzs\" (UniqueName: \"kubernetes.io/projected/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-kube-api-access-d2lzs\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535916 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.535996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-horizon-secret-key\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.536033 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.540917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-logs\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.541140 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-config-data\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.541342 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: 
I1124 18:43:24.541691 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-scripts\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.546434 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-horizon-secret-key\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.548194 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.550878 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551559 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-ceph\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551705 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551763 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-scripts\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551790 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkv2f\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-kube-api-access-kkv2f\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551842 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-logs\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.551906 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-logs\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.552253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.552322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-logs\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.554036 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.555156 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.558964 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.559588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.560005 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2lzs\" (UniqueName: \"kubernetes.io/projected/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-kube-api-access-d2lzs\") pod \"horizon-5cd66787c-cg7lk\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: W1124 18:43:24.570401 4768 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d187717_3b2d_42c1_9daa_6db0b5d2c14c.slice/crio-5c6bf4d26d41a8423321d975e4c57b8204e3834cf4d84e1498229875581d406a WatchSource:0}: Error finding container 5c6bf4d26d41a8423321d975e4c57b8204e3834cf4d84e1498229875581d406a: Status 404 returned error can't find the container with id 5c6bf4d26d41a8423321d975e4c57b8204e3834cf4d84e1498229875581d406a Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.575952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkv2f\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-kube-api-access-kkv2f\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.589198 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.628610 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653585 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-ceph\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-scripts\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-logs\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653732 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: 
\"974510b5-1952-4c05-b1af-ffade25e7787\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653940 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-config-data\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653962 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.653979 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.654015 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5b2b\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-kube-api-access-v5b2b\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.655345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.655814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-logs\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.659105 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-ceph\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.662943 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.664157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " 
pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.664430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-scripts\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.666266 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-config-data\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.675777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5b2b\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-kube-api-access-v5b2b\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.693334 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9d187717-3b2d-42c1-9daa-6db0b5d2c14c","Type":"ContainerStarted","Data":"5c6bf4d26d41a8423321d975e4c57b8204e3834cf4d84e1498229875581d406a"} Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.693358 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.698279 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.699034 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 18:43:24 crc kubenswrapper[4768]: W1124 18:43:24.703634 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97567296_4a8c_4270_96b4_83eaabf8194b.slice/crio-6e83f67bcb040c89ae8ba4554562d78c36b635f848479168d7c924eeaec91b04 WatchSource:0}: Error finding container 6e83f67bcb040c89ae8ba4554562d78c36b635f848479168d7c924eeaec91b04: Status 404 returned error can't find the container with id 6e83f67bcb040c89ae8ba4554562d78c36b635f848479168d7c924eeaec91b04 Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.834444 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-677bdf55b9-f4t6m"] Nov 24 18:43:24 crc kubenswrapper[4768]: W1124 18:43:24.836812 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e7d6092_94ac_4b23_9cfa_c3ede78e1dbd.slice/crio-9c7df066de523da8a06a68e26a7cef2d34029cbab4d7909a10f303bd192d9549 WatchSource:0}: Error finding container 9c7df066de523da8a06a68e26a7cef2d34029cbab4d7909a10f303bd192d9549: Status 404 returned error can't find the container with id 9c7df066de523da8a06a68e26a7cef2d34029cbab4d7909a10f303bd192d9549 Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.854381 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:24 crc kubenswrapper[4768]: I1124 18:43:24.977014 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d357-account-create-jbk6f"] Nov 24 18:43:25 crc kubenswrapper[4768]: W1124 18:43:25.001072 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9eb5f31_ed6d_43b8_920a_9d6767e66382.slice/crio-c3a0dd26111d55b2f3a0efac771c44447f79b5cbd558328f5f8e71bcc9656c4a WatchSource:0}: Error finding container c3a0dd26111d55b2f3a0efac771c44447f79b5cbd558328f5f8e71bcc9656c4a: Status 404 returned error can't find the container with id c3a0dd26111d55b2f3a0efac771c44447f79b5cbd558328f5f8e71bcc9656c4a Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.047844 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-7pw6k"] Nov 24 18:43:25 crc kubenswrapper[4768]: W1124 18:43:25.048330 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ebe595_8584_4ad6_a043_b2df4d7cef79.slice/crio-3cb50ab7760a1d22dca322a2cdcfe93333af2b1e5c4474d6ec02b66bcc18a4c7 WatchSource:0}: Error finding container 3cb50ab7760a1d22dca322a2cdcfe93333af2b1e5c4474d6ec02b66bcc18a4c7: Status 404 returned error can't find the container with id 3cb50ab7760a1d22dca322a2cdcfe93333af2b1e5c4474d6ec02b66bcc18a4c7 Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.238200 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.343438 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cd66787c-cg7lk"] Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.351017 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:25 crc kubenswrapper[4768]: W1124 18:43:25.426654 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92bac162_5546_4be5_a204_7c04581f7d1b.slice/crio-7fdf93f0de3a8337c169d67f106d4a98bebdc3032934133794699b76a865e6fd WatchSource:0}: Error finding container 7fdf93f0de3a8337c169d67f106d4a98bebdc3032934133794699b76a865e6fd: Status 404 returned error can't find the container with id 7fdf93f0de3a8337c169d67f106d4a98bebdc3032934133794699b76a865e6fd Nov 24 18:43:25 crc kubenswrapper[4768]: W1124 18:43:25.429070 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd19ff26_97cb_4d1e_a9ae_ecd4867ada76.slice/crio-5b8b8c989663fb09c433989c99a1afb3bf185c7a795944311a257c21334ea26e WatchSource:0}: Error finding container 5b8b8c989663fb09c433989c99a1afb3bf185c7a795944311a257c21334ea26e: Status 404 returned error can't find the container with id 5b8b8c989663fb09c433989c99a1afb3bf185c7a795944311a257c21334ea26e Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.822233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-677bdf55b9-f4t6m" event={"ID":"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd","Type":"ContainerStarted","Data":"9c7df066de523da8a06a68e26a7cef2d34029cbab4d7909a10f303bd192d9549"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.824606 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"92bac162-5546-4be5-a204-7c04581f7d1b","Type":"ContainerStarted","Data":"7fdf93f0de3a8337c169d67f106d4a98bebdc3032934133794699b76a865e6fd"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.830400 4768 generic.go:334] "Generic (PLEG): container finished" podID="60ebe595-8584-4ad6-a043-b2df4d7cef79" containerID="c47be35bbb70f8880f79dc4121a58457bb7968fcc1324bd4efd66903fc4868e2" exitCode=0 Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.830893 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7pw6k" event={"ID":"60ebe595-8584-4ad6-a043-b2df4d7cef79","Type":"ContainerDied","Data":"c47be35bbb70f8880f79dc4121a58457bb7968fcc1324bd4efd66903fc4868e2"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.830941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7pw6k" event={"ID":"60ebe595-8584-4ad6-a043-b2df4d7cef79","Type":"ContainerStarted","Data":"3cb50ab7760a1d22dca322a2cdcfe93333af2b1e5c4474d6ec02b66bcc18a4c7"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.832609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cd66787c-cg7lk" event={"ID":"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76","Type":"ContainerStarted","Data":"5b8b8c989663fb09c433989c99a1afb3bf185c7a795944311a257c21334ea26e"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.834555 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"974510b5-1952-4c05-b1af-ffade25e7787","Type":"ContainerStarted","Data":"8cd1d3a45a9fe1d2dd4ebbe0afe72b6c5b5cf516aed3d4749a43525a6edccd4a"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.836286 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9eb5f31-ed6d-43b8-920a-9d6767e66382" containerID="75c6edd6b3fbbd225044235f3b2a32887b2be8dd715f28ab67da0fdc9b6995f8" exitCode=0 Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.836398 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d357-account-create-jbk6f" event={"ID":"f9eb5f31-ed6d-43b8-920a-9d6767e66382","Type":"ContainerDied","Data":"75c6edd6b3fbbd225044235f3b2a32887b2be8dd715f28ab67da0fdc9b6995f8"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.836418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d357-account-create-jbk6f" event={"ID":"f9eb5f31-ed6d-43b8-920a-9d6767e66382","Type":"ContainerStarted","Data":"c3a0dd26111d55b2f3a0efac771c44447f79b5cbd558328f5f8e71bcc9656c4a"} Nov 24 18:43:25 crc kubenswrapper[4768]: I1124 18:43:25.840065 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"97567296-4a8c-4270-96b4-83eaabf8194b","Type":"ContainerStarted","Data":"6e83f67bcb040c89ae8ba4554562d78c36b635f848479168d7c924eeaec91b04"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.792041 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-677bdf55b9-f4t6m"] Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.832272 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-685ddbdf68-6mjzl"] Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.837339 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.840138 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.865463 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.903239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92bac162-5546-4be5-a204-7c04581f7d1b","Type":"ContainerStarted","Data":"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.903908 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-685ddbdf68-6mjzl"] Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.910373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9d187717-3b2d-42c1-9daa-6db0b5d2c14c","Type":"ContainerStarted","Data":"b754a45ba3b9e870201036878bfc951c9de52e44c830bda2144d66c1c8e0e72c"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.911245 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9d187717-3b2d-42c1-9daa-6db0b5d2c14c","Type":"ContainerStarted","Data":"5bec68ca4707669666c814cb2205bbdf481f2ad0915d5e59aa406ece826e71b5"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.929690 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfns\" (UniqueName: \"kubernetes.io/projected/375f8ae8-797c-40c7-bd90-93b3538ff9aa-kube-api-access-kpfns\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.930591 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-tls-certs\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.930695 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/375f8ae8-797c-40c7-bd90-93b3538ff9aa-logs\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.930766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-combined-ca-bundle\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.930879 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-scripts\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.931067 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-config-data\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.931145 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-secret-key\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.940681 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"974510b5-1952-4c05-b1af-ffade25e7787","Type":"ContainerStarted","Data":"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.959205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"97567296-4a8c-4270-96b4-83eaabf8194b","Type":"ContainerStarted","Data":"a33dfef92294e68d0403283c486352659393b1a6c51789f9f7b63032d5db57e6"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.959245 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"97567296-4a8c-4270-96b4-83eaabf8194b","Type":"ContainerStarted","Data":"3baea271e2651848cae126fbbdfb13d8f0ab866033d1f2eca755b89331494cd1"} Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.969100 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cd66787c-cg7lk"] Nov 24 18:43:26 crc kubenswrapper[4768]: I1124 18:43:26.987360 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.008052 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85f468447b-zhvc8"] Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.009730 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.011087 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.122463207 podStartE2EDuration="4.01107562s" podCreationTimestamp="2025-11-24 18:43:23 +0000 UTC" firstStartedPulling="2025-11-24 18:43:24.581691094 +0000 UTC m=+3243.442272871" lastFinishedPulling="2025-11-24 18:43:25.470303507 +0000 UTC m=+3244.330885284" observedRunningTime="2025-11-24 18:43:26.939990414 +0000 UTC m=+3245.800572191" watchObservedRunningTime="2025-11-24 18:43:27.01107562 +0000 UTC m=+3245.871657387" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-scripts\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033278 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdr7\" (UniqueName: \"kubernetes.io/projected/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-kube-api-access-7zdr7\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033411 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-config-data\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-secret-key\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033454 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-horizon-tls-certs\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-logs\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033519 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-scripts\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-combined-ca-bundle\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033594 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-config-data\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033613 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpfns\" (UniqueName: \"kubernetes.io/projected/375f8ae8-797c-40c7-bd90-93b3538ff9aa-kube-api-access-kpfns\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-tls-certs\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033676 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-horizon-secret-key\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/375f8ae8-797c-40c7-bd90-93b3538ff9aa-logs\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.033714 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-combined-ca-bundle\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.034550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-scripts\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.036047 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-config-data\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.036276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/375f8ae8-797c-40c7-bd90-93b3538ff9aa-logs\") pod \"horizon-685ddbdf68-6mjzl\" 
(UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.038456 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85f468447b-zhvc8"] Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.046170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-secret-key\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.050746 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.292763827 podStartE2EDuration="4.050727369s" podCreationTimestamp="2025-11-24 18:43:23 +0000 UTC" firstStartedPulling="2025-11-24 18:43:24.712324015 +0000 UTC m=+3243.572905792" lastFinishedPulling="2025-11-24 18:43:25.470287547 +0000 UTC m=+3244.330869334" observedRunningTime="2025-11-24 18:43:26.993085714 +0000 UTC m=+3245.853667491" watchObservedRunningTime="2025-11-24 18:43:27.050727369 +0000 UTC m=+3245.911309146" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.052261 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-combined-ca-bundle\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.054599 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-tls-certs\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.057057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpfns\" (UniqueName: \"kubernetes.io/projected/375f8ae8-797c-40c7-bd90-93b3538ff9aa-kube-api-access-kpfns\") pod \"horizon-685ddbdf68-6mjzl\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.151098 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-horizon-tls-certs\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.151203 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-logs\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.151255 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-scripts\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 
18:43:27.151331 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-combined-ca-bundle\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.151360 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-config-data\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.151433 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-horizon-secret-key\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.151570 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdr7\" (UniqueName: \"kubernetes.io/projected/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-kube-api-access-7zdr7\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.154052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-logs\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.154908 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-scripts\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.156379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-config-data\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.167620 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdr7\" (UniqueName: \"kubernetes.io/projected/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-kube-api-access-7zdr7\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.168850 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-horizon-tls-certs\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.169410 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-horizon-secret-key\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.169995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274-combined-ca-bundle\") pod \"horizon-85f468447b-zhvc8\" (UID: \"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274\") " pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.189891 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.340599 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.424752 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.433196 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.470144 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbddx\" (UniqueName: \"kubernetes.io/projected/60ebe595-8584-4ad6-a043-b2df4d7cef79-kube-api-access-kbddx\") pod \"60ebe595-8584-4ad6-a043-b2df4d7cef79\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.470258 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ebe595-8584-4ad6-a043-b2df4d7cef79-operator-scripts\") pod \"60ebe595-8584-4ad6-a043-b2df4d7cef79\" (UID: \"60ebe595-8584-4ad6-a043-b2df4d7cef79\") " Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.471469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ebe595-8584-4ad6-a043-b2df4d7cef79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ebe595-8584-4ad6-a043-b2df4d7cef79" (UID: "60ebe595-8584-4ad6-a043-b2df4d7cef79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.485683 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ebe595-8584-4ad6-a043-b2df4d7cef79-kube-api-access-kbddx" (OuterVolumeSpecName: "kube-api-access-kbddx") pod "60ebe595-8584-4ad6-a043-b2df4d7cef79" (UID: "60ebe595-8584-4ad6-a043-b2df4d7cef79"). InnerVolumeSpecName "kube-api-access-kbddx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.571983 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9eb5f31-ed6d-43b8-920a-9d6767e66382-operator-scripts\") pod \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.572237 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmjz4\" (UniqueName: \"kubernetes.io/projected/f9eb5f31-ed6d-43b8-920a-9d6767e66382-kube-api-access-pmjz4\") pod \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\" (UID: \"f9eb5f31-ed6d-43b8-920a-9d6767e66382\") " Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.572399 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9eb5f31-ed6d-43b8-920a-9d6767e66382-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9eb5f31-ed6d-43b8-920a-9d6767e66382" (UID: "f9eb5f31-ed6d-43b8-920a-9d6767e66382"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.573081 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbddx\" (UniqueName: \"kubernetes.io/projected/60ebe595-8584-4ad6-a043-b2df4d7cef79-kube-api-access-kbddx\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.573103 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ebe595-8584-4ad6-a043-b2df4d7cef79-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.573135 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9eb5f31-ed6d-43b8-920a-9d6767e66382-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.581677 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9eb5f31-ed6d-43b8-920a-9d6767e66382-kube-api-access-pmjz4" (OuterVolumeSpecName: "kube-api-access-pmjz4") pod "f9eb5f31-ed6d-43b8-920a-9d6767e66382" (UID: "f9eb5f31-ed6d-43b8-920a-9d6767e66382"). InnerVolumeSpecName "kube-api-access-pmjz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.675566 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmjz4\" (UniqueName: \"kubernetes.io/projected/f9eb5f31-ed6d-43b8-920a-9d6767e66382-kube-api-access-pmjz4\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.808415 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-685ddbdf68-6mjzl"] Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.978324 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85f468447b-zhvc8"] Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.979575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-685ddbdf68-6mjzl" event={"ID":"375f8ae8-797c-40c7-bd90-93b3538ff9aa","Type":"ContainerStarted","Data":"ee2d7b28768803186431aa229adda2f0e45f3372409cccdbcde283cddad9ea28"} Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.986661 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-log" containerID="cri-o://0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1" gracePeriod=30 Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.986932 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"974510b5-1952-4c05-b1af-ffade25e7787","Type":"ContainerStarted","Data":"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"} Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.987103 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-httpd" containerID="cri-o://f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5" gracePeriod=30 Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.999510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d357-account-create-jbk6f" event={"ID":"f9eb5f31-ed6d-43b8-920a-9d6767e66382","Type":"ContainerDied","Data":"c3a0dd26111d55b2f3a0efac771c44447f79b5cbd558328f5f8e71bcc9656c4a"} Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.999545 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a0dd26111d55b2f3a0efac771c44447f79b5cbd558328f5f8e71bcc9656c4a" Nov 24 18:43:27 crc kubenswrapper[4768]: I1124 18:43:27.999604 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d357-account-create-jbk6f" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.007153 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-log" containerID="cri-o://c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726" gracePeriod=30 Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.007217 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92bac162-5546-4be5-a204-7c04581f7d1b","Type":"ContainerStarted","Data":"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d"} Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.007315 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-httpd" containerID="cri-o://6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d" gracePeriod=30 Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.014010 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7pw6k" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.014061 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7pw6k" event={"ID":"60ebe595-8584-4ad6-a043-b2df4d7cef79","Type":"ContainerDied","Data":"3cb50ab7760a1d22dca322a2cdcfe93333af2b1e5c4474d6ec02b66bcc18a4c7"} Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.014098 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb50ab7760a1d22dca322a2cdcfe93333af2b1e5c4474d6ec02b66bcc18a4c7" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.029916 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.029884473 podStartE2EDuration="4.029884473s" podCreationTimestamp="2025-11-24 18:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:43:28.021317842 +0000 UTC m=+3246.881899619" watchObservedRunningTime="2025-11-24 18:43:28.029884473 +0000 UTC m=+3246.890466250" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.058146 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.058122754 podStartE2EDuration="4.058122754s" podCreationTimestamp="2025-11-24 18:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:43:28.052246776 +0000 UTC m=+3246.912828553" watchObservedRunningTime="2025-11-24 18:43:28.058122754 +0000 UTC m=+3246.918704531" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.512305 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599347 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-httpd-run\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599505 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-combined-ca-bundle\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599538 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-public-tls-certs\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599641 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-ceph\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5b2b\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-kube-api-access-v5b2b\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599773 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-config-data\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.599852 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-scripts\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.600037 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-logs\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.600063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"974510b5-1952-4c05-b1af-ffade25e7787\" (UID: \"974510b5-1952-4c05-b1af-ffade25e7787\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.601378 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.605274 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-logs" (OuterVolumeSpecName: "logs") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.606540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.606687 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-ceph" (OuterVolumeSpecName: "ceph") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.606730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-scripts" (OuterVolumeSpecName: "scripts") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.612691 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-kube-api-access-v5b2b" (OuterVolumeSpecName: "kube-api-access-v5b2b") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "kube-api-access-v5b2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.626523 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.665837 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-config-data" (OuterVolumeSpecName: "config-data") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.678507 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "974510b5-1952-4c05-b1af-ffade25e7787" (UID: "974510b5-1952-4c05-b1af-ffade25e7787"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.686118 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702610 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702639 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702648 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702660 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702670 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5b2b\" (UniqueName: \"kubernetes.io/projected/974510b5-1952-4c05-b1af-ffade25e7787-kube-api-access-v5b2b\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702678 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702688 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/974510b5-1952-4c05-b1af-ffade25e7787-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702695 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/974510b5-1952-4c05-b1af-ffade25e7787-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.702728 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.728183 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.803976 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-scripts\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804045 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-ceph\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") " Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804216 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkv2f\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-kube-api-access-kkv2f\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804273 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-logs\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804300 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-combined-ca-bundle\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804323 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-httpd-run\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804384 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-config-data\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804426 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-internal-tls-certs\") pod \"92bac162-5546-4be5-a204-7c04581f7d1b\" (UID: \"92bac162-5546-4be5-a204-7c04581f7d1b\") "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.804871 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.805620 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-logs" (OuterVolumeSpecName: "logs") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.805687 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.808803 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-ceph" (OuterVolumeSpecName: "ceph") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.811435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-kube-api-access-kkv2f" (OuterVolumeSpecName: "kube-api-access-kkv2f") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "kube-api-access-kkv2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.811539 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-scripts" (OuterVolumeSpecName: "scripts") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.817991 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.834196 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.853909 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.861995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-config-data" (OuterVolumeSpecName: "config-data") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.866461 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "92bac162-5546-4be5-a204-7c04581f7d1b" (UID: "92bac162-5546-4be5-a204-7c04581f7d1b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.875321 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.906979 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907015 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-ceph\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907026 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkv2f\" (UniqueName: \"kubernetes.io/projected/92bac162-5546-4be5-a204-7c04581f7d1b-kube-api-access-kkv2f\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907039 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-logs\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907050 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907085 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907096 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92bac162-5546-4be5-a204-7c04581f7d1b-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907104 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.907113 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92bac162-5546-4be5-a204-7c04581f7d1b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:28 crc kubenswrapper[4768]: I1124 18:43:28.928558 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.010397 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029230 4768 generic.go:334] "Generic (PLEG): container finished" podID="974510b5-1952-4c05-b1af-ffade25e7787" containerID="f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5" exitCode=0
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029261 4768 generic.go:334] "Generic (PLEG): container finished" podID="974510b5-1952-4c05-b1af-ffade25e7787" containerID="0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1" exitCode=143
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029328 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029331 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"974510b5-1952-4c05-b1af-ffade25e7787","Type":"ContainerDied","Data":"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"974510b5-1952-4c05-b1af-ffade25e7787","Type":"ContainerDied","Data":"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"974510b5-1952-4c05-b1af-ffade25e7787","Type":"ContainerDied","Data":"8cd1d3a45a9fe1d2dd4ebbe0afe72b6c5b5cf516aed3d4749a43525a6edccd4a"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.029467 4768 scope.go:117] "RemoveContainer" containerID="f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.031069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85f468447b-zhvc8" event={"ID":"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274","Type":"ContainerStarted","Data":"f74ea48adc186272555415a16ceee26e001c9982b777bf8b7faafabf29b26acc"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.033932 4768 generic.go:334] "Generic (PLEG): container finished" podID="92bac162-5546-4be5-a204-7c04581f7d1b" containerID="6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d" exitCode=0
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.033958 4768 generic.go:334] "Generic (PLEG): container finished" podID="92bac162-5546-4be5-a204-7c04581f7d1b" containerID="c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726" exitCode=143
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.033974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92bac162-5546-4be5-a204-7c04581f7d1b","Type":"ContainerDied","Data":"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.034006 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.034028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92bac162-5546-4be5-a204-7c04581f7d1b","Type":"ContainerDied","Data":"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.034043 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92bac162-5546-4be5-a204-7c04581f7d1b","Type":"ContainerDied","Data":"7fdf93f0de3a8337c169d67f106d4a98bebdc3032934133794699b76a865e6fd"}
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.077696 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.087182 4768 scope.go:117] "RemoveContainer" containerID="0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.107098 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.116644 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.142584 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.143138 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-log"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143161 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-log"
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.143187 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-httpd"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143195 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-httpd"
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.143212 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ebe595-8584-4ad6-a043-b2df4d7cef79" containerName="mariadb-database-create"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143220 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ebe595-8584-4ad6-a043-b2df4d7cef79" containerName="mariadb-database-create"
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.143240 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-log"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143248 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-log"
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.143273 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9eb5f31-ed6d-43b8-920a-9d6767e66382" containerName="mariadb-account-create"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143281 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9eb5f31-ed6d-43b8-920a-9d6767e66382" containerName="mariadb-account-create"
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.143301 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-httpd"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143308 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-httpd"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143559 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-httpd"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143588 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9eb5f31-ed6d-43b8-920a-9d6767e66382" containerName="mariadb-account-create"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143607 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ebe595-8584-4ad6-a043-b2df4d7cef79" containerName="mariadb-database-create"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143619 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-httpd"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143627 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="974510b5-1952-4c05-b1af-ffade25e7787" containerName="glance-log"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.143641 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" containerName="glance-log"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.145725 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.152749 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.153212 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.153420 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.153586 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-t2kxl"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.153805 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.157556 4768 scope.go:117] "RemoveContainer" containerID="f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"
Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.160303 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.161252 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5\": container with ID starting with f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5 not found: ID does not exist" containerID="f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"
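[analysis note, not part of the captured log] The exit codes in the PLEG "container finished" events above follow the usual 128+N convention: 0 is the clean exit of the glance-log sidecars, while 143 = 128+15 means the glance-httpd containers ended on the SIGTERM sent during graceful pod deletion. The "ContainerStatus from runtime service failed ... NotFound" error just above, and the "DeleteContainer returned error" retries that follow, are the kubelet re-issuing RemoveContainer for containers CRI-O has already pruned; despite the E-level, these lines indicate idempotent cleanup rather than a fault. A small decoder under the 128+N assumption:

    import signal

    def describe_exit(code: int) -> str:
        """Translate a container wait status into a readable cause."""
        if code == 0:
            return "clean exit"
        if code > 128:  # runtime convention: 128 + terminating signal number
            try:
                return f"killed by {signal.Signals(code - 128).name}"
            except ValueError:
                return f"killed by unknown signal {code - 128}"
        return f"application exited with error {code}"

    print(describe_exit(143))  # glance-httpd above -> killed by SIGTERM
    print(describe_exit(0))    # glance-log above   -> clean exit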
returned error" containerID={"Type":"cri-o","ID":"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"} err="failed to get container status \"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5\": rpc error: code = NotFound desc = could not find container \"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5\": container with ID starting with f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5 not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.161308 4768 scope.go:117] "RemoveContainer" containerID="0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1" Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.166778 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1\": container with ID starting with 0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1 not found: ID does not exist" containerID="0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.166823 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1"} err="failed to get container status \"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1\": rpc error: code = NotFound desc = could not find container \"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1\": container with ID starting with 0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1 not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.166849 4768 scope.go:117] "RemoveContainer" containerID="f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.173615 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.176672 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.179324 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5"} err="failed to get container status \"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5\": rpc error: code = NotFound desc = could not find container \"f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5\": container with ID starting with f1c02742978b0206862fa4065c849fdb23f68fb40283ad96460d17dcf3adcdf5 not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.179386 4768 scope.go:117] "RemoveContainer" containerID="0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.179520 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.179590 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.180226 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1"} err="failed to get container status \"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1\": rpc error: code = NotFound desc = could not find container \"0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1\": container with ID starting with 0f207050850e365897679f7927444031f5746a11ca4d4a8c566a48478554fba1 not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.180286 4768 scope.go:117] "RemoveContainer" containerID="6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.186038 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.213847 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht27l\" (UniqueName: \"kubernetes.io/projected/c7d82efd-27b9-4b06-a476-230d3dbbb176-kube-api-access-ht27l\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.213903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.213931 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-glcs4\" (UniqueName: \"kubernetes.io/projected/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-kube-api-access-glcs4\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214027 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214068 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214099 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7d82efd-27b9-4b06-a476-230d3dbbb176-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214118 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7d82efd-27b9-4b06-a476-230d3dbbb176-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214138 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214895 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214942 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-ceph\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.214983 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.215000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c7d82efd-27b9-4b06-a476-230d3dbbb176-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.215050 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.215442 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.215646 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.215787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-logs\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.216834 4768 scope.go:117] "RemoveContainer" containerID="c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.245858 4768 scope.go:117] "RemoveContainer" containerID="6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d" Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.247806 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d\": container with ID starting with 6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d not found: ID does not exist" containerID="6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.247850 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d"} err="failed to get container status \"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d\": rpc error: code = NotFound desc = could not find container 
\"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d\": container with ID starting with 6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.247876 4768 scope.go:117] "RemoveContainer" containerID="c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726" Nov 24 18:43:29 crc kubenswrapper[4768]: E1124 18:43:29.248161 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726\": container with ID starting with c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726 not found: ID does not exist" containerID="c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.248183 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726"} err="failed to get container status \"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726\": rpc error: code = NotFound desc = could not find container \"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726\": container with ID starting with c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726 not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.248197 4768 scope.go:117] "RemoveContainer" containerID="6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.248525 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d"} err="failed to get container status \"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d\": rpc error: code = NotFound desc = could not find container \"6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d\": container with ID starting with 6262eb2856df14d336c5145851073312de583304a24681391b7a8b6ae749ee9d not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.248546 4768 scope.go:117] "RemoveContainer" containerID="c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.248934 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726"} err="failed to get container status \"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726\": rpc error: code = NotFound desc = could not find container \"c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726\": container with ID starting with c051c79dfbf91eda8517d956ec87db9f8388ba242386f994505dbca3819f7726 not found: ID does not exist" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317803 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-logs\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317828 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht27l\" (UniqueName: \"kubernetes.io/projected/c7d82efd-27b9-4b06-a476-230d3dbbb176-kube-api-access-ht27l\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glcs4\" (UniqueName: \"kubernetes.io/projected/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-kube-api-access-glcs4\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317951 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.317976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7d82efd-27b9-4b06-a476-230d3dbbb176-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318027 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c7d82efd-27b9-4b06-a476-230d3dbbb176-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318063 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-ceph\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c7d82efd-27b9-4b06-a476-230d3dbbb176-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318176 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.318197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.319004 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.319846 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-logs\") pod \"glance-default-external-api-0\" (UID: 
\"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.320247 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7d82efd-27b9-4b06-a476-230d3dbbb176-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.320937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7d82efd-27b9-4b06-a476-230d3dbbb176-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.321273 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.324394 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.328268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.329579 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c7d82efd-27b9-4b06-a476-230d3dbbb176-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.335415 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht27l\" (UniqueName: \"kubernetes.io/projected/c7d82efd-27b9-4b06-a476-230d3dbbb176-kube-api-access-ht27l\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.335795 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.336561 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 
18:43:29.336604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.337321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.337431 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glcs4\" (UniqueName: \"kubernetes.io/projected/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-kube-api-access-glcs4\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.339268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7d82efd-27b9-4b06-a476-230d3dbbb176-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.343848 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-ceph\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.343900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.357944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.367089 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9\") " pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.382821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7d82efd-27b9-4b06-a476-230d3dbbb176\") " pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.473904 4768 util.go:30] "No sandbox for pod can be found. 
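[analysis note, not part of the captured log] One detail worth reading out of the block above: only the two local PVs log the two-phase sequence, MountDevice (the shared device mount, here /mnt/openstack/pv01 and /mnt/openstack/pv05) followed by SetUp (the per-pod bind mount), while the secret, projected and empty-dir volumes log SetUp alone. A sketch measuring per-volume mount latency from "MountVolume started" to "SetUp succeeded" using the klog timestamps (assuming all entries come from a single day, as they do here):

    import re
    import sys
    from datetime import datetime

    TS = re.compile(r'I\d{4} (\d\d:\d\d:\d\d\.\d+)')
    KEY = re.compile(r'volume \\"([^"\\]+)\\".*pod="([\w/-]+)"')

    def stamp(line):
        m = TS.search(line)
        return datetime.strptime(m.group(1), "%H:%M:%S.%f") if m else None

    begun, finished = {}, {}
    for line in sys.stdin:
        k = KEY.search(line)
        if not k:
            continue
        key = (k.group(2), k.group(1))  # (pod, volume)
        if "MountVolume started" in line:
            begun[key] = stamp(line)
        elif "MountVolume.SetUp succeeded" in line:
            finished[key] = stamp(line)

    for key, end in sorted(finished.items()):
        if key in begun:
            ms = (end - begun[key]).total_seconds() * 1000
            print(f"{key[0]}/{key[1]}: {ms:.0f} ms")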
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.510215 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.553633 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-867w9"] Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.556044 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.558085 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tdkmv" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.560228 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.565157 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-867w9"] Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.626296 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7g5\" (UniqueName: \"kubernetes.io/projected/6f09743b-4494-416b-98c3-2bfe275c366c-kube-api-access-xd7g5\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.626458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-combined-ca-bundle\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.626536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-job-config-data\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.626565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-config-data\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.728150 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-combined-ca-bundle\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.728227 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-job-config-data\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.728245 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-config-data\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.728321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd7g5\" (UniqueName: \"kubernetes.io/projected/6f09743b-4494-416b-98c3-2bfe275c366c-kube-api-access-xd7g5\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.733821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-combined-ca-bundle\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.735513 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-job-config-data\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.736528 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-config-data\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.747314 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd7g5\" (UniqueName: \"kubernetes.io/projected/6f09743b-4494-416b-98c3-2bfe275c366c-kube-api-access-xd7g5\") pod \"manila-db-sync-867w9\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.891182 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-867w9" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.914874 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bac162-5546-4be5-a204-7c04581f7d1b" path="/var/lib/kubelet/pods/92bac162-5546-4be5-a204-7c04581f7d1b/volumes" Nov 24 18:43:29 crc kubenswrapper[4768]: I1124 18:43:29.915778 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974510b5-1952-4c05-b1af-ffade25e7787" path="/var/lib/kubelet/pods/974510b5-1952-4c05-b1af-ffade25e7787/volumes" Nov 24 18:43:33 crc kubenswrapper[4768]: I1124 18:43:33.899024 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:43:33 crc kubenswrapper[4768]: E1124 18:43:33.900047 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:43:34 crc kubenswrapper[4768]: I1124 18:43:34.105223 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 24 18:43:34 crc kubenswrapper[4768]: I1124 18:43:34.166172 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 24 18:43:34 crc kubenswrapper[4768]: I1124 18:43:34.815086 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 18:43:34 crc kubenswrapper[4768]: W1124 18:43:34.817026 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dfeeb13_ec6b_432a_9aa4_d3a0ee4d61c9.slice/crio-d3ec464c84d4ae9254795967477cd608f8ca1184314cc2a688c594a907ea331c WatchSource:0}: Error finding container d3ec464c84d4ae9254795967477cd608f8ca1184314cc2a688c594a907ea331c: Status 404 returned error can't find the container with id d3ec464c84d4ae9254795967477cd608f8ca1184314cc2a688c594a907ea331c Nov 24 18:43:34 crc kubenswrapper[4768]: I1124 18:43:34.899656 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-867w9"] Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.004570 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 18:43:35 crc kubenswrapper[4768]: W1124 18:43:35.010301 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7d82efd_27b9_4b06_a476_230d3dbbb176.slice/crio-8bfd39a9fb9a5e4f3d1dc9cb3d3b25afa55666f5202f5714abc48820bcff25e1 WatchSource:0}: Error finding container 8bfd39a9fb9a5e4f3d1dc9cb3d3b25afa55666f5202f5714abc48820bcff25e1: Status 404 returned error can't find the container with id 8bfd39a9fb9a5e4f3d1dc9cb3d3b25afa55666f5202f5714abc48820bcff25e1 Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.110908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-677bdf55b9-f4t6m" event={"ID":"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd","Type":"ContainerStarted","Data":"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.110964 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-677bdf55b9-f4t6m" event={"ID":"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd","Type":"ContainerStarted","Data":"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.111054 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-677bdf55b9-f4t6m" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon-log" containerID="cri-o://1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc" gracePeriod=30 Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.111189 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-677bdf55b9-f4t6m" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon" containerID="cri-o://09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c" gracePeriod=30 Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.121590 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7d82efd-27b9-4b06-a476-230d3dbbb176","Type":"ContainerStarted","Data":"8bfd39a9fb9a5e4f3d1dc9cb3d3b25afa55666f5202f5714abc48820bcff25e1"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.123893 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-685ddbdf68-6mjzl" event={"ID":"375f8ae8-797c-40c7-bd90-93b3538ff9aa","Type":"ContainerStarted","Data":"9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.123927 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-685ddbdf68-6mjzl" event={"ID":"375f8ae8-797c-40c7-bd90-93b3538ff9aa","Type":"ContainerStarted","Data":"dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.136364 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9","Type":"ContainerStarted","Data":"d3ec464c84d4ae9254795967477cd608f8ca1184314cc2a688c594a907ea331c"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.137932 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-677bdf55b9-f4t6m" podStartSLOduration=1.676863396 podStartE2EDuration="11.137887904s" podCreationTimestamp="2025-11-24 18:43:24 +0000 UTC" firstStartedPulling="2025-11-24 18:43:24.839766771 +0000 UTC m=+3243.700348538" lastFinishedPulling="2025-11-24 18:43:34.300791269 +0000 UTC m=+3253.161373046" observedRunningTime="2025-11-24 18:43:35.129863898 +0000 UTC m=+3253.990445675" watchObservedRunningTime="2025-11-24 18:43:35.137887904 +0000 UTC m=+3253.998469681" Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.146136 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cd66787c-cg7lk" event={"ID":"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76","Type":"ContainerStarted","Data":"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.146220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cd66787c-cg7lk" event={"ID":"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76","Type":"ContainerStarted","Data":"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.146230 4768 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/horizon-5cd66787c-cg7lk" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon-log" containerID="cri-o://b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7" gracePeriod=30 Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.146322 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cd66787c-cg7lk" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon" containerID="cri-o://db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c" gracePeriod=30 Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.157879 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-867w9" event={"ID":"6f09743b-4494-416b-98c3-2bfe275c366c","Type":"ContainerStarted","Data":"8c74ed7a3a800768c5a919cc9f37d9e9582298f8846cd2970a58393a983f7859"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.165290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85f468447b-zhvc8" event={"ID":"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274","Type":"ContainerStarted","Data":"c9f808e4f5b68fe0901a24d56590822a1f56ea529113604c6c060be29eb60787"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.165322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85f468447b-zhvc8" event={"ID":"cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274","Type":"ContainerStarted","Data":"7733c8c0c76ff7508aac6926a7ec0c44359b92251be61975521735adb2e464f4"} Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.167268 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-685ddbdf68-6mjzl" podStartSLOduration=2.69681959 podStartE2EDuration="9.167243595s" podCreationTimestamp="2025-11-24 18:43:26 +0000 UTC" firstStartedPulling="2025-11-24 18:43:27.847901217 +0000 UTC m=+3246.708482994" lastFinishedPulling="2025-11-24 18:43:34.318325222 +0000 UTC m=+3253.178906999" observedRunningTime="2025-11-24 18:43:35.153035432 +0000 UTC m=+3254.013617209" watchObservedRunningTime="2025-11-24 18:43:35.167243595 +0000 UTC m=+3254.027825372" Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.181421 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5cd66787c-cg7lk" podStartSLOduration=2.268646487 podStartE2EDuration="11.181389546s" podCreationTimestamp="2025-11-24 18:43:24 +0000 UTC" firstStartedPulling="2025-11-24 18:43:25.447080211 +0000 UTC m=+3244.307661988" lastFinishedPulling="2025-11-24 18:43:34.35982327 +0000 UTC m=+3253.220405047" observedRunningTime="2025-11-24 18:43:35.170955636 +0000 UTC m=+3254.031537413" watchObservedRunningTime="2025-11-24 18:43:35.181389546 +0000 UTC m=+3254.041971333" Nov 24 18:43:35 crc kubenswrapper[4768]: I1124 18:43:35.206122 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-85f468447b-zhvc8" podStartSLOduration=2.895980357 podStartE2EDuration="9.205980139s" podCreationTimestamp="2025-11-24 18:43:26 +0000 UTC" firstStartedPulling="2025-11-24 18:43:28.003664075 +0000 UTC m=+3246.864245852" lastFinishedPulling="2025-11-24 18:43:34.313663857 +0000 UTC m=+3253.174245634" observedRunningTime="2025-11-24 18:43:35.194719176 +0000 UTC m=+3254.055300953" watchObservedRunningTime="2025-11-24 18:43:35.205980139 +0000 UTC m=+3254.066561916" Nov 24 18:43:36 crc kubenswrapper[4768]: I1124 18:43:36.201568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c7d82efd-27b9-4b06-a476-230d3dbbb176","Type":"ContainerStarted","Data":"a841c54456dedd979e8d244cad203c40d4e791e6e7fd09dff5b5aa4c5dd37ca5"} Nov 24 18:43:36 crc kubenswrapper[4768]: I1124 18:43:36.208293 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9","Type":"ContainerStarted","Data":"28e1393f3e8b41a6bc375241f509248e97fa851dfc41e8a5761250698c907a7b"} Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.190858 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.191157 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.220236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9","Type":"ContainerStarted","Data":"df37ac50a5e093ecc122f6f39d18ac6dd7616c1fdfd49120ee05fadddd4fd6c5"} Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.223827 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7d82efd-27b9-4b06-a476-230d3dbbb176","Type":"ContainerStarted","Data":"ed667d51a203978bc85739d4edb3d4def63e472f2d100ee75cc8f7641c6a08fa"} Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.240972 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.240943924 podStartE2EDuration="8.240943924s" podCreationTimestamp="2025-11-24 18:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:43:37.237716366 +0000 UTC m=+3256.098298153" watchObservedRunningTime="2025-11-24 18:43:37.240943924 +0000 UTC m=+3256.101525701" Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.269243 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.269217006 podStartE2EDuration="8.269217006s" podCreationTimestamp="2025-11-24 18:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:43:37.261925599 +0000 UTC m=+3256.122507376" watchObservedRunningTime="2025-11-24 18:43:37.269217006 +0000 UTC m=+3256.129798783" Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.342354 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:37 crc kubenswrapper[4768]: I1124 18:43:37.342418 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.474498 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.474867 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.511165 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 
18:43:39.511557 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.538116 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.549546 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.568337 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:39 crc kubenswrapper[4768]: I1124 18:43:39.592802 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:40 crc kubenswrapper[4768]: I1124 18:43:40.248795 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:40 crc kubenswrapper[4768]: I1124 18:43:40.250530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 18:43:40 crc kubenswrapper[4768]: I1124 18:43:40.250623 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:40 crc kubenswrapper[4768]: I1124 18:43:40.250698 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 18:43:41 crc kubenswrapper[4768]: I1124 18:43:41.259719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-867w9" event={"ID":"6f09743b-4494-416b-98c3-2bfe275c366c","Type":"ContainerStarted","Data":"6dec84c3a33543f5fb68adabd566b6c0190c109424caffcc500b6fc58a829261"} Nov 24 18:43:42 crc kubenswrapper[4768]: I1124 18:43:42.269709 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 18:43:42 crc kubenswrapper[4768]: I1124 18:43:42.271043 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 18:43:42 crc kubenswrapper[4768]: I1124 18:43:42.313566 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:42 crc kubenswrapper[4768]: I1124 18:43:42.314504 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 18:43:42 crc kubenswrapper[4768]: I1124 18:43:42.319617 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 18:43:42 crc kubenswrapper[4768]: I1124 18:43:42.356081 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-867w9" podStartSLOduration=7.882381268 podStartE2EDuration="13.356046555s" podCreationTimestamp="2025-11-24 18:43:29 +0000 UTC" firstStartedPulling="2025-11-24 18:43:34.912277782 +0000 UTC m=+3253.772859549" lastFinishedPulling="2025-11-24 18:43:40.385943059 +0000 UTC m=+3259.246524836" observedRunningTime="2025-11-24 18:43:41.298840137 +0000 UTC m=+3260.159421914" watchObservedRunningTime="2025-11-24 18:43:42.356046555 +0000 UTC m=+3261.216628332" Nov 24 18:43:43 crc kubenswrapper[4768]: I1124 18:43:43.277587 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 18:43:43 crc kubenswrapper[4768]: I1124 18:43:43.570756 4768 
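
The glance pods' startup probes above flip from "unhealthy" to "started" within a second, after which readiness probes take over and report "ready"; the interleaved "Failed to trigger a manual run" lines show the prober manager declining a manually requested readiness run. A stdlib-only sketch of startup-probe semantics, where a refused connection counts as an unhealthy result and the first success marks the container started; the URL, period, and failure threshold are illustrative, not taken from the glance pod spec:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeUntilStarted sketches a startup probe: hit the endpoint every
// period, report unhealthy on failure, declare "started" on the first
// success (at which point readiness probing would begin).
func probeUntilStarted(url string, period time.Duration, failureThreshold int) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	for failures := 0; failures < failureThreshold; {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 400 {
			resp.Body.Close()
			fmt.Println("startup probe: started")
			return true
		}
		if err == nil {
			resp.Body.Close()
		}
		failures++
		fmt.Printf("startup probe: unhealthy (%d/%d)\n", failures, failureThreshold)
		time.Sleep(period)
	}
	return false // the kubelet would restart the container here
}

func main() {
	// Illustrative endpoint; connection refused behaves like the
	// horizon probe failures seen later in the log.
	probeUntilStarted("http://127.0.0.1:9292/healthcheck", time.Second, 3)
}
```
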
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 18:43:44 crc kubenswrapper[4768]: I1124 18:43:44.380384 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:43:44 crc kubenswrapper[4768]: I1124 18:43:44.855012 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cd66787c-cg7lk" Nov 24 18:43:44 crc kubenswrapper[4768]: I1124 18:43:44.898250 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:43:45 crc kubenswrapper[4768]: I1124 18:43:45.299257 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"9eb401ad1b0ef5f0f1ac2c17170a6fef38691d5809ef5a0b3d5a468c793d8b00"} Nov 24 18:43:47 crc kubenswrapper[4768]: I1124 18:43:47.193798 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-685ddbdf68-6mjzl" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Nov 24 18:43:47 crc kubenswrapper[4768]: I1124 18:43:47.344024 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-85f468447b-zhvc8" podUID="cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Nov 24 18:43:51 crc kubenswrapper[4768]: I1124 18:43:51.367227 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-867w9" event={"ID":"6f09743b-4494-416b-98c3-2bfe275c366c","Type":"ContainerDied","Data":"6dec84c3a33543f5fb68adabd566b6c0190c109424caffcc500b6fc58a829261"} Nov 24 18:43:51 crc kubenswrapper[4768]: I1124 18:43:51.367163 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f09743b-4494-416b-98c3-2bfe275c366c" containerID="6dec84c3a33543f5fb68adabd566b6c0190c109424caffcc500b6fc58a829261" exitCode=0 Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.876446 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-867w9" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.884321 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd7g5\" (UniqueName: \"kubernetes.io/projected/6f09743b-4494-416b-98c3-2bfe275c366c-kube-api-access-xd7g5\") pod \"6f09743b-4494-416b-98c3-2bfe275c366c\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.884398 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-config-data\") pod \"6f09743b-4494-416b-98c3-2bfe275c366c\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.884426 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-combined-ca-bundle\") pod \"6f09743b-4494-416b-98c3-2bfe275c366c\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.884482 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-job-config-data\") pod \"6f09743b-4494-416b-98c3-2bfe275c366c\" (UID: \"6f09743b-4494-416b-98c3-2bfe275c366c\") " Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.895726 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "6f09743b-4494-416b-98c3-2bfe275c366c" (UID: "6f09743b-4494-416b-98c3-2bfe275c366c"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.908473 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f09743b-4494-416b-98c3-2bfe275c366c-kube-api-access-xd7g5" (OuterVolumeSpecName: "kube-api-access-xd7g5") pod "6f09743b-4494-416b-98c3-2bfe275c366c" (UID: "6f09743b-4494-416b-98c3-2bfe275c366c"). InnerVolumeSpecName "kube-api-access-xd7g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.914569 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-config-data" (OuterVolumeSpecName: "config-data") pod "6f09743b-4494-416b-98c3-2bfe275c366c" (UID: "6f09743b-4494-416b-98c3-2bfe275c366c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.942587 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f09743b-4494-416b-98c3-2bfe275c366c" (UID: "6f09743b-4494-416b-98c3-2bfe275c366c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.987883 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd7g5\" (UniqueName: \"kubernetes.io/projected/6f09743b-4494-416b-98c3-2bfe275c366c-kube-api-access-xd7g5\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.987925 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.987938 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:52 crc kubenswrapper[4768]: I1124 18:43:52.987981 4768 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6f09743b-4494-416b-98c3-2bfe275c366c-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.395396 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-867w9" event={"ID":"6f09743b-4494-416b-98c3-2bfe275c366c","Type":"ContainerDied","Data":"8c74ed7a3a800768c5a919cc9f37d9e9582298f8846cd2970a58393a983f7859"} Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.395793 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c74ed7a3a800768c5a919cc9f37d9e9582298f8846cd2970a58393a983f7859" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.395464 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-867w9" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.775320 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:43:53 crc kubenswrapper[4768]: E1124 18:43:53.775973 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09743b-4494-416b-98c3-2bfe275c366c" containerName="manila-db-sync" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.775991 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09743b-4494-416b-98c3-2bfe275c366c" containerName="manila-db-sync" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.776208 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f09743b-4494-416b-98c3-2bfe275c366c" containerName="manila-db-sync" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.779071 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.787921 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.788107 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.788479 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tdkmv" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.788659 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.805189 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.841053 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.844134 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.849219 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.880017 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.891500 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-l8v2f"] Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.893675 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-scripts\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908101 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908181 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmxv\" (UniqueName: \"kubernetes.io/projected/f2833042-f8cd-458f-b1e9-dd1998838efd-kube-api-access-2vmxv\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-config\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908252 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908279 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908303 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data\") pod \"manila-scheduler-0\" (UID: 
\"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908331 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908352 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908404 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vqb\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-kube-api-access-v5vqb\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrrqn\" (UniqueName: \"kubernetes.io/projected/841499fa-7a48-465c-891c-13987e5064d5-kube-api-access-mrrqn\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908699 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-scripts\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2833042-f8cd-458f-b1e9-dd1998838efd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-ceph\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.908987 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data-custom\") pod 
\"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.909021 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.909041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:53 crc kubenswrapper[4768]: I1124 18:43:53.940111 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-l8v2f"] Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vmxv\" (UniqueName: \"kubernetes.io/projected/f2833042-f8cd-458f-b1e9-dd1998838efd-kube-api-access-2vmxv\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013416 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-config\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013518 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013549 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013617 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5vqb\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-kube-api-access-v5vqb\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013637 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrrqn\" (UniqueName: \"kubernetes.io/projected/841499fa-7a48-465c-891c-13987e5064d5-kube-api-access-mrrqn\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013674 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-scripts\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2833042-f8cd-458f-b1e9-dd1998838efd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-ceph\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013772 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013791 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013807 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-scripts\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.013921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.015577 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-config\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.022613 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.022668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2833042-f8cd-458f-b1e9-dd1998838efd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.030220 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-ceph\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.030372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.030992 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.031036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.031286 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.032021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/841499fa-7a48-465c-891c-13987e5064d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.032392 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.034245 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.035707 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-scripts\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.038314 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.044877 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vmxv\" (UniqueName: \"kubernetes.io/projected/f2833042-f8cd-458f-b1e9-dd1998838efd-kube-api-access-2vmxv\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.046503 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-scripts\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc 
kubenswrapper[4768]: I1124 18:43:54.047086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.051015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.070762 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrrqn\" (UniqueName: \"kubernetes.io/projected/841499fa-7a48-465c-891c-13987e5064d5-kube-api-access-mrrqn\") pod \"dnsmasq-dns-76b5fdb995-l8v2f\" (UID: \"841499fa-7a48-465c-891c-13987e5064d5\") " pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.072355 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.084388 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.088081 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.090916 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5vqb\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-kube-api-access-v5vqb\") pod \"manila-share-share1-0\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.092843 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.106996 4768 util.go:30] "No sandbox for pod can be found. 
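
Each volume in the block above moves through the same three phases: "VerifyControllerAttachedVolume started", "MountVolume started", then "MountVolume.SetUp succeeded"; only once every volume for a pod reaches the mounted state does sandbox creation proceed (hence the "No sandbox for pod can be found. Need to start a new one" lines that follow). A sketch of that per-volume state machine, with invented volume names and types:

```go
package main

import "fmt"

// mountState tracks a volume through the phases visible in the log.
type mountState int

const (
	attached mountState = iota // VerifyControllerAttachedVolume done
	mounting                   // MountVolume started
	mounted                    // MountVolume.SetUp succeeded
)

// advance moves every volume one phase forward and reports whether
// all volumes have reached the mounted state.
func advance(states map[string]mountState) bool {
	allMounted := true
	for name, st := range states {
		switch st {
		case attached:
			fmt.Printf("MountVolume started for volume %q\n", name)
			states[name] = mounting
			allMounted = false
		case mounting:
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
			states[name] = mounted
			allMounted = false
		}
	}
	return allMounted
}

func main() {
	states := map[string]mountState{
		"config-data": attached, "scripts": attached, "combined-ca-bundle": attached,
	}
	for !advance(states) {
	}
	fmt.Println("all volumes mounted; sandbox for manila-scheduler-0 can start")
}
```
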
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.116708 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.122663 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.122808 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162f3c9a-6a5c-4617-b30a-90ac9ac22825-logs\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.122883 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvb6\" (UniqueName: \"kubernetes.io/projected/162f3c9a-6a5c-4617-b30a-90ac9ac22825-kube-api-access-knvb6\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.122935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-scripts\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.123009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.123050 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data-custom\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.123149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/162f3c9a-6a5c-4617-b30a-90ac9ac22825-etc-machine-id\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.173504 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/162f3c9a-6a5c-4617-b30a-90ac9ac22825-etc-machine-id\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224339 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162f3c9a-6a5c-4617-b30a-90ac9ac22825-logs\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224372 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvb6\" (UniqueName: \"kubernetes.io/projected/162f3c9a-6a5c-4617-b30a-90ac9ac22825-kube-api-access-knvb6\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224401 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-scripts\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.224457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data-custom\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.225686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162f3c9a-6a5c-4617-b30a-90ac9ac22825-logs\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.226018 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/162f3c9a-6a5c-4617-b30a-90ac9ac22825-etc-machine-id\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.232175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-scripts\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 
24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.234513 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.235068 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.235464 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data-custom\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.246057 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.255864 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvb6\" (UniqueName: \"kubernetes.io/projected/162f3c9a-6a5c-4617-b30a-90ac9ac22825-kube-api-access-knvb6\") pod \"manila-api-0\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") " pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.332442 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.759587 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:43:54 crc kubenswrapper[4768]: I1124 18:43:54.841686 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-l8v2f"] Nov 24 18:43:55 crc kubenswrapper[4768]: I1124 18:43:55.128209 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:43:55 crc kubenswrapper[4768]: W1124 18:43:55.147359 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9066febc_fa33_4d85_954b_5533708e7e9d.slice/crio-69e26beac49b82d143cc86abfe3617cf6a742257e66edbccd89c74f8d1178ee4 WatchSource:0}: Error finding container 69e26beac49b82d143cc86abfe3617cf6a742257e66edbccd89c74f8d1178ee4: Status 404 returned error can't find the container with id 69e26beac49b82d143cc86abfe3617cf6a742257e66edbccd89c74f8d1178ee4 Nov 24 18:43:55 crc kubenswrapper[4768]: I1124 18:43:55.165231 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 18:43:55 crc kubenswrapper[4768]: I1124 18:43:55.455900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9066febc-fa33-4d85-954b-5533708e7e9d","Type":"ContainerStarted","Data":"69e26beac49b82d143cc86abfe3617cf6a742257e66edbccd89c74f8d1178ee4"} Nov 24 18:43:55 crc kubenswrapper[4768]: I1124 18:43:55.457628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f2833042-f8cd-458f-b1e9-dd1998838efd","Type":"ContainerStarted","Data":"cb1dcdda64163bf48d3c6ad9fa6e5e4541e6d871af81e8626cc9ba26c5a2981c"} Nov 24 18:43:55 
crc kubenswrapper[4768]: I1124 18:43:55.459928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"162f3c9a-6a5c-4617-b30a-90ac9ac22825","Type":"ContainerStarted","Data":"ad76f411ca3ce3313b097656bb548abb4f77a86653fc562ff372c40ee60ead0b"} Nov 24 18:43:55 crc kubenswrapper[4768]: I1124 18:43:55.462247 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" event={"ID":"841499fa-7a48-465c-891c-13987e5064d5","Type":"ContainerStarted","Data":"3b71de799ad5f347862cce6be8d7ab4cd60e38ec3a40e1093f152d0756bf9cc3"} Nov 24 18:43:55 crc kubenswrapper[4768]: I1124 18:43:55.462290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" event={"ID":"841499fa-7a48-465c-891c-13987e5064d5","Type":"ContainerStarted","Data":"a6ccd5b293e5c3e168084c48c9740e0cd964277a296dcff3b18c25bc84623e2a"} Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.478761 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"162f3c9a-6a5c-4617-b30a-90ac9ac22825","Type":"ContainerStarted","Data":"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e"} Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.481116 4768 generic.go:334] "Generic (PLEG): container finished" podID="841499fa-7a48-465c-891c-13987e5064d5" containerID="3b71de799ad5f347862cce6be8d7ab4cd60e38ec3a40e1093f152d0756bf9cc3" exitCode=0 Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.481187 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" event={"ID":"841499fa-7a48-465c-891c-13987e5064d5","Type":"ContainerDied","Data":"3b71de799ad5f347862cce6be8d7ab4cd60e38ec3a40e1093f152d0756bf9cc3"} Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.481220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" event={"ID":"841499fa-7a48-465c-891c-13987e5064d5","Type":"ContainerStarted","Data":"a7bd356725c2edcd91cd1fde4f9125f63cb4c0b5d93e6756c602e95d8e041251"} Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.481261 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.483642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f2833042-f8cd-458f-b1e9-dd1998838efd","Type":"ContainerStarted","Data":"6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c"} Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.503467 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" podStartSLOduration=3.503450028 podStartE2EDuration="3.503450028s" podCreationTimestamp="2025-11-24 18:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:43:56.500330574 +0000 UTC m=+3275.360912351" watchObservedRunningTime="2025-11-24 18:43:56.503450028 +0000 UTC m=+3275.364031795" Nov 24 18:43:56 crc kubenswrapper[4768]: I1124 18:43:56.853973 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 24 18:43:57 crc kubenswrapper[4768]: I1124 18:43:57.507525 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
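
The dnsmasq pod above shows the init-container pattern as PLEG sees it: container 3b71de79... exits 0, the "Generic (PLEG): container finished" line records the transition during a relist, and the sync loop then receives a ContainerDied event followed by ContainerStarted for the main container a7bd3567... A sketch of that relist-and-diff step; the state strings mirror the log, while the diffing code itself is invented:

```go
package main

import "fmt"

// event is a PLEG-style lifecycle event consumed by the sync loop.
type event struct{ Type, Data string }

// relist compares the previous and current container states and turns
// each transition into an event, the way the pod lifecycle event
// generator does on every relist.
func relist(old, cur map[string]string) []event {
	var events []event
	for id, state := range cur {
		switch prev := old[id]; {
		case prev != "running" && state == "running":
			events = append(events, event{"ContainerStarted", id})
		case prev == "running" && state == "exited":
			events = append(events, event{"ContainerDied", id})
		}
	}
	return events
}

func main() {
	old := map[string]string{"3b71de79": "running"} // dnsmasq init container
	cur := map[string]string{"3b71de79": "exited", "a7bd3567": "running"}
	for _, ev := range relist(old, cur) {
		fmt.Printf("SyncLoop (PLEG): %s %s\n", ev.Type, ev.Data)
	}
}
```
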
event={"ID":"f2833042-f8cd-458f-b1e9-dd1998838efd","Type":"ContainerStarted","Data":"bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8"} Nov 24 18:43:57 crc kubenswrapper[4768]: I1124 18:43:57.510783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"162f3c9a-6a5c-4617-b30a-90ac9ac22825","Type":"ContainerStarted","Data":"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"} Nov 24 18:43:57 crc kubenswrapper[4768]: I1124 18:43:57.556342 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.807209836 podStartE2EDuration="4.556323979s" podCreationTimestamp="2025-11-24 18:43:53 +0000 UTC" firstStartedPulling="2025-11-24 18:43:54.760222678 +0000 UTC m=+3273.620804455" lastFinishedPulling="2025-11-24 18:43:55.509336821 +0000 UTC m=+3274.369918598" observedRunningTime="2025-11-24 18:43:57.536177056 +0000 UTC m=+3276.396758833" watchObservedRunningTime="2025-11-24 18:43:57.556323979 +0000 UTC m=+3276.416905756" Nov 24 18:43:57 crc kubenswrapper[4768]: I1124 18:43:57.559652 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.559643819 podStartE2EDuration="3.559643819s" podCreationTimestamp="2025-11-24 18:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:43:57.552831224 +0000 UTC m=+3276.413413001" watchObservedRunningTime="2025-11-24 18:43:57.559643819 +0000 UTC m=+3276.420225596" Nov 24 18:43:58 crc kubenswrapper[4768]: I1124 18:43:58.523028 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 24 18:43:58 crc kubenswrapper[4768]: I1124 18:43:58.523131 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api" containerID="cri-o://ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c" gracePeriod=30 Nov 24 18:43:58 crc kubenswrapper[4768]: I1124 18:43:58.523602 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api-log" containerID="cri-o://76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e" gracePeriod=30 Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.217362 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.217362 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.315620 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-685ddbdf68-6mjzl"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.318544 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-scripts\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.318672 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/162f3c9a-6a5c-4617-b30a-90ac9ac22825-etc-machine-id\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.318747 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/162f3c9a-6a5c-4617-b30a-90ac9ac22825-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.318797 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knvb6\" (UniqueName: \"kubernetes.io/projected/162f3c9a-6a5c-4617-b30a-90ac9ac22825-kube-api-access-knvb6\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.318855 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data-custom\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.319018 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-combined-ca-bundle\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.319107 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.319207 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162f3c9a-6a5c-4617-b30a-90ac9ac22825-logs\") pod \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\" (UID: \"162f3c9a-6a5c-4617-b30a-90ac9ac22825\") "
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.319864 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/162f3c9a-6a5c-4617-b30a-90ac9ac22825-etc-machine-id\") on node \"crc\" DevicePath \"\""
"kubernetes.io/empty-dir/162f3c9a-6a5c-4617-b30a-90ac9ac22825-logs" (OuterVolumeSpecName: "logs") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.329123 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162f3c9a-6a5c-4617-b30a-90ac9ac22825-kube-api-access-knvb6" (OuterVolumeSpecName: "kube-api-access-knvb6") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "kube-api-access-knvb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.329508 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.329644 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-scripts" (OuterVolumeSpecName: "scripts") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.363395 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.369871 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-85f468447b-zhvc8" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.383887 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data" (OuterVolumeSpecName: "config-data") pod "162f3c9a-6a5c-4617-b30a-90ac9ac22825" (UID: "162f3c9a-6a5c-4617-b30a-90ac9ac22825"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.421781 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knvb6\" (UniqueName: \"kubernetes.io/projected/162f3c9a-6a5c-4617-b30a-90ac9ac22825-kube-api-access-knvb6\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.422123 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.422136 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.422149 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.422161 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162f3c9a-6a5c-4617-b30a-90ac9ac22825-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.422171 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/162f3c9a-6a5c-4617-b30a-90ac9ac22825-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536170 4768 generic.go:334] "Generic (PLEG): container finished" podID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerID="ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c" exitCode=0 Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536210 4768 generic.go:334] "Generic (PLEG): container finished" podID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerID="76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e" exitCode=143 Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536239 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536239 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536240 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"162f3c9a-6a5c-4617-b30a-90ac9ac22825","Type":"ContainerDied","Data":"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"}
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"162f3c9a-6a5c-4617-b30a-90ac9ac22825","Type":"ContainerDied","Data":"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e"}
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"162f3c9a-6a5c-4617-b30a-90ac9ac22825","Type":"ContainerDied","Data":"ad76f411ca3ce3313b097656bb548abb4f77a86653fc562ff372c40ee60ead0b"}
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.536439 4768 scope.go:117] "RemoveContainer" containerID="ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.573108 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.581523 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.599993 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Nov 24 18:43:59 crc kubenswrapper[4768]: E1124 18:43:59.600402 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.600415 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api"
Nov 24 18:43:59 crc kubenswrapper[4768]: E1124 18:43:59.600432 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api-log"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.600438 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api-log"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.600646 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.600670 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" containerName="manila-api-log"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.602203 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.604365 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.605116 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.605283 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.644470 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.644779 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-central-agent" containerID="cri-o://81d6f4cf9f89c103d1c185e18d10d08ef9ded5976903ffd1d906d8bfc349b5ef" gracePeriod=30
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.645208 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="proxy-httpd" containerID="cri-o://6ce3d787f405cb55f2496ac50e073ef9076246a1732c5891e4805da40731dea6" gracePeriod=30
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.645262 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="sg-core" containerID="cri-o://e69bae1c93e3efacb4eb74e45dc2663d4eebf35b375821c2f9cd6d5f63a9854e" gracePeriod=30
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.645298 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-notification-agent" containerID="cri-o://c53e0604152b2d13e447c6824d9047eff5af845863468352f4a02e9e69565251" gracePeriod=30
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.676071 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732680 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-public-tls-certs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732738 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-config-data-custom\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f30f2c98-4600-4324-b983-59a519225520-etc-machine-id\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
\"config-data\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-config-data\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732812 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvvc\" (UniqueName: \"kubernetes.io/projected/f30f2c98-4600-4324-b983-59a519225520-kube-api-access-kzvvc\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732898 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-scripts\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732920 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30f2c98-4600-4324-b983-59a519225520-logs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732971 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.732996 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.834904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835008 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-public-tls-certs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-config-data-custom\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f30f2c98-4600-4324-b983-59a519225520-etc-machine-id\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: 
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-config-data\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835107 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvvc\" (UniqueName: \"kubernetes.io/projected/f30f2c98-4600-4324-b983-59a519225520-kube-api-access-kzvvc\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-scripts\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835211 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30f2c98-4600-4324-b983-59a519225520-logs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.835260 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.837048 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f30f2c98-4600-4324-b983-59a519225520-etc-machine-id\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.843131 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.844118 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.846468 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-config-data\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.846872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-scripts\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0"
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30f2c98-4600-4324-b983-59a519225520-logs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.850213 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-public-tls-certs\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.854066 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30f2c98-4600-4324-b983-59a519225520-config-data-custom\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.855068 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvvc\" (UniqueName: \"kubernetes.io/projected/f30f2c98-4600-4324-b983-59a519225520-kube-api-access-kzvvc\") pod \"manila-api-0\" (UID: \"f30f2c98-4600-4324-b983-59a519225520\") " pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.916612 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 18:43:59 crc kubenswrapper[4768]: I1124 18:43:59.919797 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="162f3c9a-6a5c-4617-b30a-90ac9ac22825" path="/var/lib/kubelet/pods/162f3c9a-6a5c-4617-b30a-90ac9ac22825/volumes" Nov 24 18:44:00 crc kubenswrapper[4768]: I1124 18:44:00.557125 4768 generic.go:334] "Generic (PLEG): container finished" podID="a1fee949-0151-40ec-9c6e-1554e2279306" containerID="6ce3d787f405cb55f2496ac50e073ef9076246a1732c5891e4805da40731dea6" exitCode=0 Nov 24 18:44:00 crc kubenswrapper[4768]: I1124 18:44:00.557462 4768 generic.go:334] "Generic (PLEG): container finished" podID="a1fee949-0151-40ec-9c6e-1554e2279306" containerID="e69bae1c93e3efacb4eb74e45dc2663d4eebf35b375821c2f9cd6d5f63a9854e" exitCode=2 Nov 24 18:44:00 crc kubenswrapper[4768]: I1124 18:44:00.557472 4768 generic.go:334] "Generic (PLEG): container finished" podID="a1fee949-0151-40ec-9c6e-1554e2279306" containerID="81d6f4cf9f89c103d1c185e18d10d08ef9ded5976903ffd1d906d8bfc349b5ef" exitCode=0 Nov 24 18:44:00 crc kubenswrapper[4768]: I1124 18:44:00.557204 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerDied","Data":"6ce3d787f405cb55f2496ac50e073ef9076246a1732c5891e4805da40731dea6"} Nov 24 18:44:00 crc kubenswrapper[4768]: I1124 18:44:00.557535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerDied","Data":"e69bae1c93e3efacb4eb74e45dc2663d4eebf35b375821c2f9cd6d5f63a9854e"} Nov 24 18:44:00 crc kubenswrapper[4768]: I1124 18:44:00.557550 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerDied","Data":"81d6f4cf9f89c103d1c185e18d10d08ef9ded5976903ffd1d906d8bfc349b5ef"} Nov 24 18:44:01 crc kubenswrapper[4768]: I1124 18:44:01.784734 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-685ddbdf68-6mjzl" 
Nov 24 18:44:01 crc kubenswrapper[4768]: I1124 18:44:01.792415 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-85f468447b-zhvc8"
Nov 24 18:44:01 crc kubenswrapper[4768]: I1124 18:44:01.912020 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-685ddbdf68-6mjzl"]
Nov 24 18:44:02 crc kubenswrapper[4768]: I1124 18:44:02.578356 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-685ddbdf68-6mjzl" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" containerID="cri-o://9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0" gracePeriod=30
Nov 24 18:44:02 crc kubenswrapper[4768]: I1124 18:44:02.578834 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-685ddbdf68-6mjzl" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon-log" containerID="cri-o://dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f" gracePeriod=30
Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.041638 4768 scope.go:117] "RemoveContainer" containerID="76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e"
Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.281766 4768 scope.go:117] "RemoveContainer" containerID="ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"
Nov 24 18:44:03 crc kubenswrapper[4768]: E1124 18:44:03.283024 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c\": container with ID starting with ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c not found: ID does not exist" containerID="ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"
Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.283083 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"} err="failed to get container status \"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c\": rpc error: code = NotFound desc = could not find container \"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c\": container with ID starting with ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c not found: ID does not exist"
Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.283115 4768 scope.go:117] "RemoveContainer" containerID="76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e"
Nov 24 18:44:03 crc kubenswrapper[4768]: E1124 18:44:03.283603 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e\": container with ID starting with 76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e not found: ID does not exist" containerID="76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e"
\"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e\": container with ID starting with 76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e not found: ID does not exist" Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.283683 4768 scope.go:117] "RemoveContainer" containerID="ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c" Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.283895 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c"} err="failed to get container status \"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c\": rpc error: code = NotFound desc = could not find container \"ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c\": container with ID starting with ffbc8d9f167494133d55183b0e12ea9dd9fc1d2b4e6c7cf0750a57cb3cc9983c not found: ID does not exist" Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.283924 4768 scope.go:117] "RemoveContainer" containerID="76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e" Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.284283 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e"} err="failed to get container status \"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e\": rpc error: code = NotFound desc = could not find container \"76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e\": container with ID starting with 76652d40cd415b80d9fc46da4d32aa7faa0445769285cdb1e79002078e42a51e not found: ID does not exist" Nov 24 18:44:03 crc kubenswrapper[4768]: I1124 18:44:03.607135 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 18:44:03 crc kubenswrapper[4768]: W1124 18:44:03.609592 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30f2c98_4600_4324_b983_59a519225520.slice/crio-b0adfcb28eb9b11e5cc6030951e8e60f7f2033f87cf598124d072a3402181da0 WatchSource:0}: Error finding container b0adfcb28eb9b11e5cc6030951e8e60f7f2033f87cf598124d072a3402181da0: Status 404 returned error can't find the container with id b0adfcb28eb9b11e5cc6030951e8e60f7f2033f87cf598124d072a3402181da0 Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.107967 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.248667 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-l8v2f" Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.331632 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kg8vc"] Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.331900 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerName="dnsmasq-dns" containerID="cri-o://58361ec7b1e7b265454a61e5ae93f7eea8623cfa5e0e8beba17f76dddc8663d4" gracePeriod=10 Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.615411 4768 generic.go:334] "Generic (PLEG): container finished" podID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerID="58361ec7b1e7b265454a61e5ae93f7eea8623cfa5e0e8beba17f76dddc8663d4" 
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.615411 4768 generic.go:334] "Generic (PLEG): container finished" podID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerID="58361ec7b1e7b265454a61e5ae93f7eea8623cfa5e0e8beba17f76dddc8663d4" exitCode=0
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.615975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" event={"ID":"e0edf9a4-37b3-4519-84ca-2c4fce4c0808","Type":"ContainerDied","Data":"58361ec7b1e7b265454a61e5ae93f7eea8623cfa5e0e8beba17f76dddc8663d4"}
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.620268 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9066febc-fa33-4d85-954b-5533708e7e9d","Type":"ContainerStarted","Data":"4365912c8ccf374969fc839a7bbbb4eef2abdb0b2cdbbbae1de1d79ceaf00d7e"}
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.620327 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9066febc-fa33-4d85-954b-5533708e7e9d","Type":"ContainerStarted","Data":"4c85d97719047fb6105ee424a052ad3b0b1d3c76f580071a39ff834dcfcdb5df"}
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.624719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f30f2c98-4600-4324-b983-59a519225520","Type":"ContainerStarted","Data":"e14d7fe3e0e6c8d18e87d003fa9ffa11959277a21b3d73325fdbd2f2d4df3a64"}
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.624871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f30f2c98-4600-4324-b983-59a519225520","Type":"ContainerStarted","Data":"b0adfcb28eb9b11e5cc6030951e8e60f7f2033f87cf598124d072a3402181da0"}
Nov 24 18:44:04 crc kubenswrapper[4768]: I1124 18:44:04.652373 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.7828590699999998 podStartE2EDuration="11.652350618s" podCreationTimestamp="2025-11-24 18:43:53 +0000 UTC" firstStartedPulling="2025-11-24 18:43:55.179377057 +0000 UTC m=+3274.039958834" lastFinishedPulling="2025-11-24 18:44:03.048868605 +0000 UTC m=+3281.909450382" observedRunningTime="2025-11-24 18:44:04.645132643 +0000 UTC m=+3283.505714420" watchObservedRunningTime="2025-11-24 18:44:04.652350618 +0000 UTC m=+3283.512932385"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.795302 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.845343 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-openstack-edpm-ipam\") pod \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.845406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7z26\" (UniqueName: \"kubernetes.io/projected/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-kube-api-access-f7z26\") pod \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.845514 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-sb\") pod \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.845604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-dns-svc\") pod \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.845677 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-nb\") pod \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.845767 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-config\") pod \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\" (UID: \"e0edf9a4-37b3-4519-84ca-2c4fce4c0808\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.853710 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-kube-api-access-f7z26" (OuterVolumeSpecName: "kube-api-access-f7z26") pod "e0edf9a4-37b3-4519-84ca-2c4fce4c0808" (UID: "e0edf9a4-37b3-4519-84ca-2c4fce4c0808"). InnerVolumeSpecName "kube-api-access-f7z26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.899278 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0edf9a4-37b3-4519-84ca-2c4fce4c0808" (UID: "e0edf9a4-37b3-4519-84ca-2c4fce4c0808"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.908989 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0edf9a4-37b3-4519-84ca-2c4fce4c0808" (UID: "e0edf9a4-37b3-4519-84ca-2c4fce4c0808"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.910436 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-config" (OuterVolumeSpecName: "config") pod "e0edf9a4-37b3-4519-84ca-2c4fce4c0808" (UID: "e0edf9a4-37b3-4519-84ca-2c4fce4c0808"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.916664 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0edf9a4-37b3-4519-84ca-2c4fce4c0808" (UID: "e0edf9a4-37b3-4519-84ca-2c4fce4c0808"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.979310 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.979352 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.979363 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.979376 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.979391 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:04.979405 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7z26\" (UniqueName: \"kubernetes.io/projected/e0edf9a4-37b3-4519-84ca-2c4fce4c0808-kube-api-access-f7z26\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.594781 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-677bdf55b9-f4t6m" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.599062 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.599062 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cd66787c-cg7lk"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637001 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerID="db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c" exitCode=137
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637047 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerID="b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7" exitCode=137
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637058 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cd66787c-cg7lk"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637078 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cd66787c-cg7lk" event={"ID":"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76","Type":"ContainerDied","Data":"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637128 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cd66787c-cg7lk" event={"ID":"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76","Type":"ContainerDied","Data":"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cd66787c-cg7lk" event={"ID":"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76","Type":"ContainerDied","Data":"5b8b8c989663fb09c433989c99a1afb3bf185c7a795944311a257c21334ea26e"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.637159 4768 scope.go:117] "RemoveContainer" containerID="db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.640946 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.642417 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-kg8vc" event={"ID":"e0edf9a4-37b3-4519-84ca-2c4fce4c0808","Type":"ContainerDied","Data":"8b4f3d1acf54158bb92e31855ac9cc7e545fada8f2362bb8483fc0eb1c835aba"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.646367 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f30f2c98-4600-4324-b983-59a519225520","Type":"ContainerStarted","Data":"2dff366f71b865543e8cd15bd33b6891c5afd655293fb23ac1a7a625b7ce4a83"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.646560 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.654554 4768 generic.go:334] "Generic (PLEG): container finished" podID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerID="09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c" exitCode=137
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.654748 4768 generic.go:334] "Generic (PLEG): container finished" podID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerID="1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc" exitCode=137
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.655087 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-677bdf55b9-f4t6m"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.655693 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-677bdf55b9-f4t6m" event={"ID":"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd","Type":"ContainerDied","Data":"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.655723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-677bdf55b9-f4t6m" event={"ID":"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd","Type":"ContainerDied","Data":"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.655734 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-677bdf55b9-f4t6m" event={"ID":"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd","Type":"ContainerDied","Data":"9c7df066de523da8a06a68e26a7cef2d34029cbab4d7909a10f303bd192d9549"}
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.689717 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.68968771 podStartE2EDuration="6.68968771s" podCreationTimestamp="2025-11-24 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:44:05.673601466 +0000 UTC m=+3284.534183243" watchObservedRunningTime="2025-11-24 18:44:05.68968771 +0000 UTC m=+3284.550269487"
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.698138 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-logs\") pod \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.698265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d596t\" (UniqueName: \"kubernetes.io/projected/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-kube-api-access-d596t\") pod \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") "
Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.698377 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-scripts\") pod \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.700827 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-horizon-secret-key\") pod \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.700897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-config-data\") pod \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.700981 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-config-data\") pod \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.701085 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-logs\") pod \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.701150 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2lzs\" (UniqueName: \"kubernetes.io/projected/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-kube-api-access-d2lzs\") pod \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.701190 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-scripts\") pod \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\" (UID: \"bd19ff26-97cb-4d1e-a9ae-ecd4867ada76\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.701213 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-horizon-secret-key\") pod \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\" (UID: \"4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd\") " Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.701907 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-logs" (OuterVolumeSpecName: "logs") pod "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" (UID: "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.704729 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.704755 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.705922 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-kube-api-access-d596t" (OuterVolumeSpecName: "kube-api-access-d596t") pod "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" (UID: "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd"). InnerVolumeSpecName "kube-api-access-d596t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.708287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-kube-api-access-d2lzs" (OuterVolumeSpecName: "kube-api-access-d2lzs") pod "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" (UID: "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76"). InnerVolumeSpecName "kube-api-access-d2lzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.708167 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" (UID: "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.708685 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" (UID: "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.712137 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kg8vc"] Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.723765 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-kg8vc"] Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.734013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-scripts" (OuterVolumeSpecName: "scripts") pod "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" (UID: "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.735018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-config-data" (OuterVolumeSpecName: "config-data") pod "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" (UID: "4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.735051 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-config-data" (OuterVolumeSpecName: "config-data") pod "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" (UID: "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.735686 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-scripts" (OuterVolumeSpecName: "scripts") pod "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" (UID: "bd19ff26-97cb-4d1e-a9ae-ecd4867ada76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806658 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d596t\" (UniqueName: \"kubernetes.io/projected/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-kube-api-access-d596t\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806697 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806708 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806719 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806731 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806743 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2lzs\" (UniqueName: \"kubernetes.io/projected/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-kube-api-access-d2lzs\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806757 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.806806 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.837161 4768 scope.go:117] "RemoveContainer" containerID="b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.876982 4768 scope.go:117] "RemoveContainer" containerID="db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c" Nov 24 18:44:05 crc kubenswrapper[4768]: E1124 18:44:05.878161 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c\": container with ID starting with db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c not found: ID does not exist" containerID="db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.878224 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c"} err="failed to get container status \"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c\": rpc error: code = NotFound desc = could not find container \"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c\": container with ID starting with db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c not found: ID does not exist" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.878267 4768 scope.go:117] "RemoveContainer" containerID="b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7" Nov 24 18:44:05 crc kubenswrapper[4768]: E1124 18:44:05.878740 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7\": container with ID starting with b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7 not found: ID does not exist" containerID="b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.878808 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7"} err="failed to get container status \"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7\": rpc error: code = NotFound desc = could not find container \"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7\": container with ID starting with b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7 not found: ID does not exist" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.878829 4768 scope.go:117] "RemoveContainer" containerID="db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.879472 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c"} err="failed to get container status \"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c\": rpc error: code = NotFound desc = could not find container \"db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c\": container with ID starting with db5a623658108646ef2afcc67b8674dc0de2bf2f4bce929732ce5409bcfad05c not found: ID does not exist" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.879511 4768 scope.go:117] "RemoveContainer" containerID="b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.879903 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7"} err="failed to get container status \"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7\": rpc error: code = NotFound desc = could not find container 
\"b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7\": container with ID starting with b23139624462546f964fa1f1d7015cd513f9268add82271b608c722bc0f5abf7 not found: ID does not exist" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.879964 4768 scope.go:117] "RemoveContainer" containerID="58361ec7b1e7b265454a61e5ae93f7eea8623cfa5e0e8beba17f76dddc8663d4" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.912623 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" path="/var/lib/kubelet/pods/e0edf9a4-37b3-4519-84ca-2c4fce4c0808/volumes" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.913367 4768 scope.go:117] "RemoveContainer" containerID="6c1703fe1fa7e0cc4999a5c66fc2c4b1cde37a276233e6f3e65db73b6c319a29" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.975007 4768 scope.go:117] "RemoveContainer" containerID="09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c" Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.981340 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cd66787c-cg7lk"] Nov 24 18:44:05 crc kubenswrapper[4768]: I1124 18:44:05.990476 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cd66787c-cg7lk"] Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.054421 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-677bdf55b9-f4t6m"] Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.064863 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-677bdf55b9-f4t6m"] Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.203549 4768 scope.go:117] "RemoveContainer" containerID="1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.225046 4768 scope.go:117] "RemoveContainer" containerID="09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c" Nov 24 18:44:06 crc kubenswrapper[4768]: E1124 18:44:06.225674 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c\": container with ID starting with 09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c not found: ID does not exist" containerID="09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.225712 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c"} err="failed to get container status \"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c\": rpc error: code = NotFound desc = could not find container \"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c\": container with ID starting with 09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c not found: ID does not exist" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.225740 4768 scope.go:117] "RemoveContainer" containerID="1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc" Nov 24 18:44:06 crc kubenswrapper[4768]: E1124 18:44:06.226265 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc\": container with ID starting with 
1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc not found: ID does not exist" containerID="1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.226313 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc"} err="failed to get container status \"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc\": rpc error: code = NotFound desc = could not find container \"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc\": container with ID starting with 1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc not found: ID does not exist" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.226343 4768 scope.go:117] "RemoveContainer" containerID="09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.226707 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c"} err="failed to get container status \"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c\": rpc error: code = NotFound desc = could not find container \"09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c\": container with ID starting with 09362bdffc0eb578caa4a661a8a6263177f73dcd375460510fe56d668ee7426c not found: ID does not exist" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.226728 4768 scope.go:117] "RemoveContainer" containerID="1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.227137 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc"} err="failed to get container status \"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc\": rpc error: code = NotFound desc = could not find container \"1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc\": container with ID starting with 1f29dcf7afc8e19d8d0f6063057327c7542563f7a8712fd420787d28a569ebfc not found: ID does not exist" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.673936 4768 generic.go:334] "Generic (PLEG): container finished" podID="a1fee949-0151-40ec-9c6e-1554e2279306" containerID="c53e0604152b2d13e447c6824d9047eff5af845863468352f4a02e9e69565251" exitCode=0 Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.674456 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerDied","Data":"c53e0604152b2d13e447c6824d9047eff5af845863468352f4a02e9e69565251"} Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.674532 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1fee949-0151-40ec-9c6e-1554e2279306","Type":"ContainerDied","Data":"d04483dc4f8b6d8a72be605e0beeed60423e07595c01de20780a8a47448ac924"} Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.674549 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04483dc4f8b6d8a72be605e0beeed60423e07595c01de20780a8a47448ac924" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.677970 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.679787 4768 generic.go:334] "Generic (PLEG): container finished" podID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerID="9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0" exitCode=0 Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.679848 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-685ddbdf68-6mjzl" event={"ID":"375f8ae8-797c-40c7-bd90-93b3538ff9aa","Type":"ContainerDied","Data":"9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0"} Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833064 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-ceilometer-tls-certs\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833189 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-combined-ca-bundle\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smfbb\" (UniqueName: \"kubernetes.io/projected/a1fee949-0151-40ec-9c6e-1554e2279306-kube-api-access-smfbb\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833314 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-sg-core-conf-yaml\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833391 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-config-data\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833479 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-scripts\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833546 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-log-httpd\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.833573 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-run-httpd\") pod \"a1fee949-0151-40ec-9c6e-1554e2279306\" (UID: \"a1fee949-0151-40ec-9c6e-1554e2279306\") " Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.836133 4768 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.838104 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.843599 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fee949-0151-40ec-9c6e-1554e2279306-kube-api-access-smfbb" (OuterVolumeSpecName: "kube-api-access-smfbb") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "kube-api-access-smfbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.846184 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-scripts" (OuterVolumeSpecName: "scripts") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.871741 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.896615 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.936002 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smfbb\" (UniqueName: \"kubernetes.io/projected/a1fee949-0151-40ec-9c6e-1554e2279306-kube-api-access-smfbb\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.936461 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.936536 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.936590 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.936639 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1fee949-0151-40ec-9c6e-1554e2279306-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.936713 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.943056 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:06 crc kubenswrapper[4768]: I1124 18:44:06.969176 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-config-data" (OuterVolumeSpecName: "config-data") pod "a1fee949-0151-40ec-9c6e-1554e2279306" (UID: "a1fee949-0151-40ec-9c6e-1554e2279306"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.039071 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.039103 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fee949-0151-40ec-9c6e-1554e2279306-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.192699 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-685ddbdf68-6mjzl" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.689053 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.727442 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.736878 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.750870 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751290 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751306 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751322 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerName="dnsmasq-dns" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751329 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerName="dnsmasq-dns" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751338 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="sg-core" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751344 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="sg-core" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751367 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-notification-agent" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751373 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-notification-agent" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751388 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-central-agent" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751396 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" 
containerName="ceilometer-central-agent" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751414 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751420 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751437 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="proxy-httpd" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751442 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="proxy-httpd" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751456 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerName="init" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751461 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerName="init" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751470 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon-log" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751476 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon-log" Nov 24 18:44:07 crc kubenswrapper[4768]: E1124 18:44:07.751497 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon-log" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751550 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon-log" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751740 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon-log" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751753 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-notification-agent" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751769 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="sg-core" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751780 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" containerName="horizon" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751789 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="ceilometer-central-agent" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751798 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" containerName="horizon" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751807 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" containerName="proxy-httpd" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751821 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" 
containerName="horizon-log" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.751828 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0edf9a4-37b3-4519-84ca-2c4fce4c0808" containerName="dnsmasq-dns" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.753513 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.758999 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.759381 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.760442 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.771681 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872028 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-config-data\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872344 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7cwc\" (UniqueName: \"kubernetes.io/projected/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-kube-api-access-g7cwc\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-log-httpd\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872552 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-run-httpd\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872653 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872773 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-scripts\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.872976 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.911700 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd" path="/var/lib/kubelet/pods/4e7d6092-94ac-4b23-9cfa-c3ede78e1dbd/volumes" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.912732 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fee949-0151-40ec-9c6e-1554e2279306" path="/var/lib/kubelet/pods/a1fee949-0151-40ec-9c6e-1554e2279306/volumes" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.914730 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd19ff26-97cb-4d1e-a9ae-ecd4867ada76" path="/var/lib/kubelet/pods/bd19ff26-97cb-4d1e-a9ae-ecd4867ada76/volumes" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.975127 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-config-data\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.975251 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7cwc\" (UniqueName: \"kubernetes.io/projected/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-kube-api-access-g7cwc\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.975310 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-log-httpd\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.975384 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-run-httpd\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.975505 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.976127 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-log-httpd\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.976133 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-run-httpd\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.976688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.976943 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-scripts\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.977009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.980471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-scripts\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.980545 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-config-data\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.981019 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.981137 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:07 crc kubenswrapper[4768]: I1124 18:44:07.983856 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:08 crc kubenswrapper[4768]: I1124 18:44:08.006673 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7cwc\" (UniqueName: \"kubernetes.io/projected/81427e5e-c0e8-4445-8a60-2b5dcdcf9a52-kube-api-access-g7cwc\") pod \"ceilometer-0\" (UID: \"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52\") " pod="openstack/ceilometer-0" Nov 24 18:44:08 crc kubenswrapper[4768]: I1124 18:44:08.072266 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 18:44:08 crc kubenswrapper[4768]: I1124 18:44:08.565943 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 18:44:08 crc kubenswrapper[4768]: I1124 18:44:08.701451 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52","Type":"ContainerStarted","Data":"5f6b3ab239bdf1fc01915bd03f33c15f1bc056081b3bfdab82a4ebf5342fdf73"} Nov 24 18:44:09 crc kubenswrapper[4768]: I1124 18:44:09.713109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52","Type":"ContainerStarted","Data":"e6ef113ca1f7f7f54a6f77ee5f110797edd792686445cd84f89e3faa93bbc876"} Nov 24 18:44:10 crc kubenswrapper[4768]: I1124 18:44:10.728205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52","Type":"ContainerStarted","Data":"8a9fd7a8a4ffa54e3f61b049179701375f28a92fe7e8576e2ee1e190374f893d"} Nov 24 18:44:10 crc kubenswrapper[4768]: I1124 18:44:10.728594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52","Type":"ContainerStarted","Data":"f7c7579d46741bede9aed06c23a3efa50963de8e2a42bec4f8487261ebcda575"} Nov 24 18:44:12 crc kubenswrapper[4768]: I1124 18:44:12.765473 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81427e5e-c0e8-4445-8a60-2b5dcdcf9a52","Type":"ContainerStarted","Data":"8efdf23f7a45a32f264e0a9099a0e3958ebe418a8779f38afb5d25a9af2ae78c"} Nov 24 18:44:12 crc kubenswrapper[4768]: I1124 18:44:12.766758 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 18:44:12 crc kubenswrapper[4768]: I1124 18:44:12.806541 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.471372818 podStartE2EDuration="5.806496768s" podCreationTimestamp="2025-11-24 18:44:07 +0000 UTC" firstStartedPulling="2025-11-24 18:44:08.572386795 +0000 UTC m=+3287.432968572" lastFinishedPulling="2025-11-24 18:44:11.907510745 +0000 UTC m=+3290.768092522" observedRunningTime="2025-11-24 18:44:12.802145042 +0000 UTC m=+3291.662726819" watchObservedRunningTime="2025-11-24 18:44:12.806496768 +0000 UTC m=+3291.667078555" Nov 24 18:44:14 crc kubenswrapper[4768]: I1124 18:44:14.175394 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.600660 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.643151 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.692068 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.721434 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.803262 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="manila-scheduler" 
containerID="cri-o://6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c" gracePeriod=30 Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.803367 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="probe" containerID="cri-o://bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8" gracePeriod=30 Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.803431 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="manila-share" containerID="cri-o://4c85d97719047fb6105ee424a052ad3b0b1d3c76f580071a39ff834dcfcdb5df" gracePeriod=30 Nov 24 18:44:15 crc kubenswrapper[4768]: I1124 18:44:15.803584 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="probe" containerID="cri-o://4365912c8ccf374969fc839a7bbbb4eef2abdb0b2cdbbbae1de1d79ceaf00d7e" gracePeriod=30 Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.817159 4768 generic.go:334] "Generic (PLEG): container finished" podID="9066febc-fa33-4d85-954b-5533708e7e9d" containerID="4365912c8ccf374969fc839a7bbbb4eef2abdb0b2cdbbbae1de1d79ceaf00d7e" exitCode=0 Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.819438 4768 generic.go:334] "Generic (PLEG): container finished" podID="9066febc-fa33-4d85-954b-5533708e7e9d" containerID="4c85d97719047fb6105ee424a052ad3b0b1d3c76f580071a39ff834dcfcdb5df" exitCode=1 Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.817348 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9066febc-fa33-4d85-954b-5533708e7e9d","Type":"ContainerDied","Data":"4365912c8ccf374969fc839a7bbbb4eef2abdb0b2cdbbbae1de1d79ceaf00d7e"} Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.819744 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9066febc-fa33-4d85-954b-5533708e7e9d","Type":"ContainerDied","Data":"4c85d97719047fb6105ee424a052ad3b0b1d3c76f580071a39ff834dcfcdb5df"} Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.819826 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9066febc-fa33-4d85-954b-5533708e7e9d","Type":"ContainerDied","Data":"69e26beac49b82d143cc86abfe3617cf6a742257e66edbccd89c74f8d1178ee4"} Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.819903 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e26beac49b82d143cc86abfe3617cf6a742257e66edbccd89c74f8d1178ee4" Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.823288 4768 generic.go:334] "Generic (PLEG): container finished" podID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerID="bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8" exitCode=0 Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.823338 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f2833042-f8cd-458f-b1e9-dd1998838efd","Type":"ContainerDied","Data":"bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8"} Nov 24 18:44:16 crc kubenswrapper[4768]: I1124 18:44:16.884842 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.008665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-etc-machine-id\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.008767 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5vqb\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-kube-api-access-v5vqb\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.008824 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-scripts\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.008912 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.008968 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.009082 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data-custom\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.009157 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-combined-ca-bundle\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.009229 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-var-lib-manila\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.009341 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-ceph\") pod \"9066febc-fa33-4d85-954b-5533708e7e9d\" (UID: \"9066febc-fa33-4d85-954b-5533708e7e9d\") " Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.010157 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-etc-machine-id\") on 
node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.011974 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.017307 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.017632 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-ceph" (OuterVolumeSpecName: "ceph") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.020326 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-scripts" (OuterVolumeSpecName: "scripts") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.020409 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-kube-api-access-v5vqb" (OuterVolumeSpecName: "kube-api-access-v5vqb") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "kube-api-access-v5vqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.084925 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.112022 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.112065 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.112076 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9066febc-fa33-4d85-954b-5533708e7e9d-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.112084 4768 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.112095 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5vqb\" (UniqueName: \"kubernetes.io/projected/9066febc-fa33-4d85-954b-5533708e7e9d-kube-api-access-v5vqb\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.112105 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.129603 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data" (OuterVolumeSpecName: "config-data") pod "9066febc-fa33-4d85-954b-5533708e7e9d" (UID: "9066febc-fa33-4d85-954b-5533708e7e9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.191847 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-685ddbdf68-6mjzl" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.215221 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9066febc-fa33-4d85-954b-5533708e7e9d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.841627 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.893317 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.942059 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.947745 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:44:17 crc kubenswrapper[4768]: E1124 18:44:17.951359 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="probe" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.951400 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="probe" Nov 24 18:44:17 crc kubenswrapper[4768]: E1124 18:44:17.951450 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="manila-share" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.951461 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="manila-share" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.951768 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="probe" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.951791 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" containerName="manila-share" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.953737 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.960263 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 24 18:44:17 crc kubenswrapper[4768]: I1124 18:44:17.972627 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035374 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59a6e210-36bf-431b-a1b4-3784ec202cde-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035479 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035569 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035654 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/59a6e210-36bf-431b-a1b4-3784ec202cde-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vzw2\" (UniqueName: \"kubernetes.io/projected/59a6e210-36bf-431b-a1b4-3784ec202cde-kube-api-access-4vzw2\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035744 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/59a6e210-36bf-431b-a1b4-3784ec202cde-ceph\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035764 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-config-data\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.035819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-scripts\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc 
kubenswrapper[4768]: I1124 18:44:18.138730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/59a6e210-36bf-431b-a1b4-3784ec202cde-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.138805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vzw2\" (UniqueName: \"kubernetes.io/projected/59a6e210-36bf-431b-a1b4-3784ec202cde-kube-api-access-4vzw2\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.138838 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/59a6e210-36bf-431b-a1b4-3784ec202cde-ceph\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.138870 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-config-data\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.138896 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/59a6e210-36bf-431b-a1b4-3784ec202cde-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.138936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-scripts\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.139020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59a6e210-36bf-431b-a1b4-3784ec202cde-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.139093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.139140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.139360 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/59a6e210-36bf-431b-a1b4-3784ec202cde-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.144139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/59a6e210-36bf-431b-a1b4-3784ec202cde-ceph\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.144354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.144895 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.145131 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-config-data\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.145612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a6e210-36bf-431b-a1b4-3784ec202cde-scripts\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.162413 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vzw2\" (UniqueName: \"kubernetes.io/projected/59a6e210-36bf-431b-a1b4-3784ec202cde-kube-api-access-4vzw2\") pod \"manila-share-share1-0\" (UID: \"59a6e210-36bf-431b-a1b4-3784ec202cde\") " pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.283265 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 18:44:18 crc kubenswrapper[4768]: I1124 18:44:18.969408 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:19.908554 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9066febc-fa33-4d85-954b-5533708e7e9d" path="/var/lib/kubelet/pods/9066febc-fa33-4d85-954b-5533708e7e9d/volumes" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:19.909803 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"59a6e210-36bf-431b-a1b4-3784ec202cde","Type":"ContainerStarted","Data":"0773d08f83b4704e1f6f1df538068dedc84c5b0b654a03ebfb21d6591b51833e"} Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:19.909831 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"59a6e210-36bf-431b-a1b4-3784ec202cde","Type":"ContainerStarted","Data":"ea201ab8edcf68c5b7b428bbe0780416396a1ea020d24492cba45e1299784e7e"} Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.105102 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mp249"] Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.108622 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.116658 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mp249"] Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.198294 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-utilities\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.198671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-catalog-content\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.198885 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nsxb\" (UniqueName: \"kubernetes.io/projected/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-kube-api-access-6nsxb\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.301650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-utilities\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.302951 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-catalog-content\") pod \"certified-operators-mp249\" (UID: 
\"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.302746 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-utilities\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.303443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nsxb\" (UniqueName: \"kubernetes.io/projected/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-kube-api-access-6nsxb\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.303668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-catalog-content\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.329742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nsxb\" (UniqueName: \"kubernetes.io/projected/0808f00d-bd89-4029-a8f1-3c81c1b9b4cb-kube-api-access-6nsxb\") pod \"certified-operators-mp249\" (UID: \"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb\") " pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.475387 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.705456 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.820162 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data\") pod \"f2833042-f8cd-458f-b1e9-dd1998838efd\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.820239 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data-custom\") pod \"f2833042-f8cd-458f-b1e9-dd1998838efd\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.820333 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-combined-ca-bundle\") pod \"f2833042-f8cd-458f-b1e9-dd1998838efd\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.820376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-scripts\") pod \"f2833042-f8cd-458f-b1e9-dd1998838efd\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.820510 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2833042-f8cd-458f-b1e9-dd1998838efd-etc-machine-id\") pod \"f2833042-f8cd-458f-b1e9-dd1998838efd\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.820539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmxv\" (UniqueName: \"kubernetes.io/projected/f2833042-f8cd-458f-b1e9-dd1998838efd-kube-api-access-2vmxv\") pod \"f2833042-f8cd-458f-b1e9-dd1998838efd\" (UID: \"f2833042-f8cd-458f-b1e9-dd1998838efd\") " Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.821072 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2833042-f8cd-458f-b1e9-dd1998838efd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f2833042-f8cd-458f-b1e9-dd1998838efd" (UID: "f2833042-f8cd-458f-b1e9-dd1998838efd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.826454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2833042-f8cd-458f-b1e9-dd1998838efd" (UID: "f2833042-f8cd-458f-b1e9-dd1998838efd"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.828736 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mp249"] Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.842130 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2833042-f8cd-458f-b1e9-dd1998838efd-kube-api-access-2vmxv" (OuterVolumeSpecName: "kube-api-access-2vmxv") pod "f2833042-f8cd-458f-b1e9-dd1998838efd" (UID: "f2833042-f8cd-458f-b1e9-dd1998838efd"). InnerVolumeSpecName "kube-api-access-2vmxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.850670 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-scripts" (OuterVolumeSpecName: "scripts") pod "f2833042-f8cd-458f-b1e9-dd1998838efd" (UID: "f2833042-f8cd-458f-b1e9-dd1998838efd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.933815 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.933847 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2833042-f8cd-458f-b1e9-dd1998838efd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.933856 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vmxv\" (UniqueName: \"kubernetes.io/projected/f2833042-f8cd-458f-b1e9-dd1998838efd-kube-api-access-2vmxv\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.933865 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.944329 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2833042-f8cd-458f-b1e9-dd1998838efd" (UID: "f2833042-f8cd-458f-b1e9-dd1998838efd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.982667 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data" (OuterVolumeSpecName: "config-data") pod "f2833042-f8cd-458f-b1e9-dd1998838efd" (UID: "f2833042-f8cd-458f-b1e9-dd1998838efd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:20 crc kubenswrapper[4768]: I1124 18:44:20.986881 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp249" event={"ID":"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb","Type":"ContainerStarted","Data":"05ff08394024b049e0c2943d4598a46c043795ab4f0f8ac92e7d20e09463eb77"} Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.005831 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"59a6e210-36bf-431b-a1b4-3784ec202cde","Type":"ContainerStarted","Data":"1ff3ac4e4177aa2356d1620eb1cfc260930078ee3e9f2377823af6072951c094"} Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.024276 4768 generic.go:334] "Generic (PLEG): container finished" podID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerID="6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c" exitCode=0 Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.024322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f2833042-f8cd-458f-b1e9-dd1998838efd","Type":"ContainerDied","Data":"6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c"} Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.024354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f2833042-f8cd-458f-b1e9-dd1998838efd","Type":"ContainerDied","Data":"cb1dcdda64163bf48d3c6ad9fa6e5e4541e6d871af81e8626cc9ba26c5a2981c"} Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.024373 4768 scope.go:117] "RemoveContainer" containerID="bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.024523 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.033327 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.033300623 podStartE2EDuration="4.033300623s" podCreationTimestamp="2025-11-24 18:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:44:21.02826463 +0000 UTC m=+3299.888846397" watchObservedRunningTime="2025-11-24 18:44:21.033300623 +0000 UTC m=+3299.893882400" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.035854 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.035888 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2833042-f8cd-458f-b1e9-dd1998838efd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.064565 4768 scope.go:117] "RemoveContainer" containerID="6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.079555 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.094191 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.101158 4768 scope.go:117] "RemoveContainer" containerID="bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8" Nov 24 18:44:21 crc kubenswrapper[4768]: E1124 18:44:21.105186 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8\": container with ID starting with bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8 not found: ID does not exist" containerID="bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.105264 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8"} err="failed to get container status \"bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8\": rpc error: code = NotFound desc = could not find container \"bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8\": container with ID starting with bd68ff3363c01cea7d6812f71374f6817e1ef49abcf44e3c2edf041e00e567a8 not found: ID does not exist" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.105305 4768 scope.go:117] "RemoveContainer" containerID="6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c" Nov 24 18:44:21 crc kubenswrapper[4768]: E1124 18:44:21.108856 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c\": container with ID starting with 6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c not found: ID does not exist" containerID="6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c" Nov 24 18:44:21 crc 
kubenswrapper[4768]: I1124 18:44:21.108943 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c"} err="failed to get container status \"6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c\": rpc error: code = NotFound desc = could not find container \"6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c\": container with ID starting with 6cac60bb20f5758f9dd538aeb74418fcb103e58c556d9169de11baeb89b9be8c not found: ID does not exist" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.114655 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:44:21 crc kubenswrapper[4768]: E1124 18:44:21.115838 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="manila-scheduler" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.115856 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="manila-scheduler" Nov 24 18:44:21 crc kubenswrapper[4768]: E1124 18:44:21.115910 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="probe" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.115920 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="probe" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.116347 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="manila-scheduler" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.116370 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" containerName="probe" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.118699 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.121342 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.146142 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.254951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-config-data\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.255252 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685b1427-a20b-4fb0-a6c9-42ec98f11d67-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.255380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-scripts\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.255540 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.255655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.255741 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgnzb\" (UniqueName: \"kubernetes.io/projected/685b1427-a20b-4fb0-a6c9-42ec98f11d67-kube-api-access-tgnzb\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.357629 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685b1427-a20b-4fb0-a6c9-42ec98f11d67-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.357944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-scripts\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.357983 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.358010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.358034 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgnzb\" (UniqueName: \"kubernetes.io/projected/685b1427-a20b-4fb0-a6c9-42ec98f11d67-kube-api-access-tgnzb\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.358143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-config-data\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.359178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/685b1427-a20b-4fb0-a6c9-42ec98f11d67-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.363823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-scripts\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.364133 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-config-data\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.364326 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.365038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685b1427-a20b-4fb0-a6c9-42ec98f11d67-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.388545 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgnzb\" (UniqueName: \"kubernetes.io/projected/685b1427-a20b-4fb0-a6c9-42ec98f11d67-kube-api-access-tgnzb\") pod \"manila-scheduler-0\" (UID: \"685b1427-a20b-4fb0-a6c9-42ec98f11d67\") " 
pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.434598 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.867258 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.941092 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2833042-f8cd-458f-b1e9-dd1998838efd" path="/var/lib/kubelet/pods/f2833042-f8cd-458f-b1e9-dd1998838efd/volumes" Nov 24 18:44:21 crc kubenswrapper[4768]: I1124 18:44:21.943318 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 18:44:22 crc kubenswrapper[4768]: I1124 18:44:22.049646 4768 generic.go:334] "Generic (PLEG): container finished" podID="0808f00d-bd89-4029-a8f1-3c81c1b9b4cb" containerID="ffd646f4b23ef2c6561d09a8c84625e7cffc0525f7ba47c055e62691b486ec1a" exitCode=0 Nov 24 18:44:22 crc kubenswrapper[4768]: I1124 18:44:22.050660 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp249" event={"ID":"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb","Type":"ContainerDied","Data":"ffd646f4b23ef2c6561d09a8c84625e7cffc0525f7ba47c055e62691b486ec1a"} Nov 24 18:44:22 crc kubenswrapper[4768]: I1124 18:44:22.054354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"685b1427-a20b-4fb0-a6c9-42ec98f11d67","Type":"ContainerStarted","Data":"aeeb5939247b1e318f2e585f72606bbfb6f83b08cafff532be25f1e0914ff428"} Nov 24 18:44:23 crc kubenswrapper[4768]: I1124 18:44:23.068365 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"685b1427-a20b-4fb0-a6c9-42ec98f11d67","Type":"ContainerStarted","Data":"729a31648878148cae9fa53a77eac16737b85d580136a42c4a0ead9f967e8c0a"} Nov 24 18:44:23 crc kubenswrapper[4768]: I1124 18:44:23.068825 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"685b1427-a20b-4fb0-a6c9-42ec98f11d67","Type":"ContainerStarted","Data":"817636595d009b93fd5fc525f4d28ce904ed9f6705644c51c725415803d15c66"} Nov 24 18:44:23 crc kubenswrapper[4768]: I1124 18:44:23.095117 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.095091437 podStartE2EDuration="2.095091437s" podCreationTimestamp="2025-11-24 18:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:44:23.088075901 +0000 UTC m=+3301.948657688" watchObservedRunningTime="2025-11-24 18:44:23.095091437 +0000 UTC m=+3301.955673214" Nov 24 18:44:27 crc kubenswrapper[4768]: I1124 18:44:27.111847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp249" event={"ID":"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb","Type":"ContainerStarted","Data":"cd0e3e2dc569d117fd088bdcac51ceb3fa7a37baa0305a2e191f303068cbed8d"} Nov 24 18:44:27 crc kubenswrapper[4768]: I1124 18:44:27.191527 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-685ddbdf68-6mjzl" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Nov 24 
18:44:27 crc kubenswrapper[4768]: I1124 18:44:27.191725 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:44:28 crc kubenswrapper[4768]: I1124 18:44:28.132034 4768 generic.go:334] "Generic (PLEG): container finished" podID="0808f00d-bd89-4029-a8f1-3c81c1b9b4cb" containerID="cd0e3e2dc569d117fd088bdcac51ceb3fa7a37baa0305a2e191f303068cbed8d" exitCode=0 Nov 24 18:44:28 crc kubenswrapper[4768]: I1124 18:44:28.132511 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp249" event={"ID":"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb","Type":"ContainerDied","Data":"cd0e3e2dc569d117fd088bdcac51ceb3fa7a37baa0305a2e191f303068cbed8d"} Nov 24 18:44:28 crc kubenswrapper[4768]: I1124 18:44:28.283558 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 24 18:44:29 crc kubenswrapper[4768]: I1124 18:44:29.149132 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mp249" event={"ID":"0808f00d-bd89-4029-a8f1-3c81c1b9b4cb","Type":"ContainerStarted","Data":"8ceadeea69a6c3b5d31da8fe127034c887f46bed87434c389b4fe6c351ba11bc"} Nov 24 18:44:30 crc kubenswrapper[4768]: I1124 18:44:30.476174 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:30 crc kubenswrapper[4768]: I1124 18:44:30.476536 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:31 crc kubenswrapper[4768]: I1124 18:44:31.435674 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 24 18:44:31 crc kubenswrapper[4768]: I1124 18:44:31.555268 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mp249" podUID="0808f00d-bd89-4029-a8f1-3c81c1b9b4cb" containerName="registry-server" probeResult="failure" output=< Nov 24 18:44:31 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 18:44:31 crc kubenswrapper[4768]: > Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.071390 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.104851 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mp249" podStartSLOduration=6.588202789 podStartE2EDuration="13.104827033s" podCreationTimestamp="2025-11-24 18:44:20 +0000 UTC" firstStartedPulling="2025-11-24 18:44:22.053939198 +0000 UTC m=+3300.914520975" lastFinishedPulling="2025-11-24 18:44:28.570563402 +0000 UTC m=+3307.431145219" observedRunningTime="2025-11-24 18:44:29.180864651 +0000 UTC m=+3308.041446418" watchObservedRunningTime="2025-11-24 18:44:33.104827033 +0000 UTC m=+3311.965408820" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148036 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-combined-ca-bundle\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148143 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpfns\" (UniqueName: \"kubernetes.io/projected/375f8ae8-797c-40c7-bd90-93b3538ff9aa-kube-api-access-kpfns\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148195 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-tls-certs\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148221 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-scripts\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148246 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-secret-key\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/375f8ae8-797c-40c7-bd90-93b3538ff9aa-logs\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148514 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-config-data\") pod \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\" (UID: \"375f8ae8-797c-40c7-bd90-93b3538ff9aa\") " Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.148876 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/375f8ae8-797c-40c7-bd90-93b3538ff9aa-logs" (OuterVolumeSpecName: "logs") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.149088 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/375f8ae8-797c-40c7-bd90-93b3538ff9aa-logs\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.156384 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.156628 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375f8ae8-797c-40c7-bd90-93b3538ff9aa-kube-api-access-kpfns" (OuterVolumeSpecName: "kube-api-access-kpfns") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "kube-api-access-kpfns". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.191084 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.194088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-config-data" (OuterVolumeSpecName: "config-data") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.196343 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-scripts" (OuterVolumeSpecName: "scripts") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.213131 4768 generic.go:334] "Generic (PLEG): container finished" podID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerID="dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f" exitCode=137 Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.213189 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-685ddbdf68-6mjzl" event={"ID":"375f8ae8-797c-40c7-bd90-93b3538ff9aa","Type":"ContainerDied","Data":"dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f"} Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.213203 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-685ddbdf68-6mjzl" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.213255 4768 scope.go:117] "RemoveContainer" containerID="9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.213242 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-685ddbdf68-6mjzl" event={"ID":"375f8ae8-797c-40c7-bd90-93b3538ff9aa","Type":"ContainerDied","Data":"ee2d7b28768803186431aa229adda2f0e45f3372409cccdbcde283cddad9ea28"} Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.218842 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "375f8ae8-797c-40c7-bd90-93b3538ff9aa" (UID: "375f8ae8-797c-40c7-bd90-93b3538ff9aa"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.251671 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.251717 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpfns\" (UniqueName: \"kubernetes.io/projected/375f8ae8-797c-40c7-bd90-93b3538ff9aa-kube-api-access-kpfns\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.251730 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.251743 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.251754 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/375f8ae8-797c-40c7-bd90-93b3538ff9aa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.251764 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375f8ae8-797c-40c7-bd90-93b3538ff9aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.438685 4768 scope.go:117] "RemoveContainer" containerID="dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.477852 4768 scope.go:117] "RemoveContainer" containerID="9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0" Nov 24 18:44:33 crc kubenswrapper[4768]: E1124 18:44:33.478456 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0\": container with ID starting with 9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0 not found: ID does not exist" containerID="9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.478505 4768 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0"} err="failed to get container status \"9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0\": rpc error: code = NotFound desc = could not find container \"9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0\": container with ID starting with 9d5defbbb888d0480a9bd00acdbf027c73eab09218669780b95ebff9badadda0 not found: ID does not exist" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.478535 4768 scope.go:117] "RemoveContainer" containerID="dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f" Nov 24 18:44:33 crc kubenswrapper[4768]: E1124 18:44:33.478849 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f\": container with ID starting with dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f not found: ID does not exist" containerID="dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.478879 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f"} err="failed to get container status \"dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f\": rpc error: code = NotFound desc = could not find container \"dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f\": container with ID starting with dd70561e5068e1e0f3df25c9e16da4a8b4293262fac37158993694aa2ea8335f not found: ID does not exist" Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.552185 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-685ddbdf68-6mjzl"] Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.561247 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-685ddbdf68-6mjzl"] Nov 24 18:44:33 crc kubenswrapper[4768]: I1124 18:44:33.912228 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" path="/var/lib/kubelet/pods/375f8ae8-797c-40c7-bd90-93b3538ff9aa/volumes" Nov 24 18:44:38 crc kubenswrapper[4768]: I1124 18:44:38.087121 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 18:44:39 crc kubenswrapper[4768]: I1124 18:44:39.791795 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 24 18:44:40 crc kubenswrapper[4768]: I1124 18:44:40.555151 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:40 crc kubenswrapper[4768]: I1124 18:44:40.641672 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mp249" Nov 24 18:44:40 crc kubenswrapper[4768]: I1124 18:44:40.722906 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mp249"] Nov 24 18:44:40 crc kubenswrapper[4768]: I1124 18:44:40.817613 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzf6q"] Nov 24 18:44:40 crc kubenswrapper[4768]: I1124 18:44:40.819112 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zzf6q" 
podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="registry-server" containerID="cri-o://fb8ec67bf788812bef579c2597a6151eb95b6192e2cd6225d790f1f2853bce55" gracePeriod=2 Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.303845 4768 generic.go:334] "Generic (PLEG): container finished" podID="897ec217-5614-490e-893e-52e2f87b7422" containerID="fb8ec67bf788812bef579c2597a6151eb95b6192e2cd6225d790f1f2853bce55" exitCode=0 Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.303915 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerDied","Data":"fb8ec67bf788812bef579c2597a6151eb95b6192e2cd6225d790f1f2853bce55"} Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.304478 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzf6q" event={"ID":"897ec217-5614-490e-893e-52e2f87b7422","Type":"ContainerDied","Data":"ed20f1d6fc627134322c5124befd3c12a02f42edc7f5e8518bce5fc65d00fe42"} Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.304520 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed20f1d6fc627134322c5124befd3c12a02f42edc7f5e8518bce5fc65d00fe42" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.332124 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.444019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-utilities\") pod \"897ec217-5614-490e-893e-52e2f87b7422\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.444084 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-catalog-content\") pod \"897ec217-5614-490e-893e-52e2f87b7422\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.444259 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwlv\" (UniqueName: \"kubernetes.io/projected/897ec217-5614-490e-893e-52e2f87b7422-kube-api-access-mpwlv\") pod \"897ec217-5614-490e-893e-52e2f87b7422\" (UID: \"897ec217-5614-490e-893e-52e2f87b7422\") " Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.444739 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-utilities" (OuterVolumeSpecName: "utilities") pod "897ec217-5614-490e-893e-52e2f87b7422" (UID: "897ec217-5614-490e-893e-52e2f87b7422"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.505228 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897ec217-5614-490e-893e-52e2f87b7422-kube-api-access-mpwlv" (OuterVolumeSpecName: "kube-api-access-mpwlv") pod "897ec217-5614-490e-893e-52e2f87b7422" (UID: "897ec217-5614-490e-893e-52e2f87b7422"). InnerVolumeSpecName "kube-api-access-mpwlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.536875 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "897ec217-5614-490e-893e-52e2f87b7422" (UID: "897ec217-5614-490e-893e-52e2f87b7422"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.546994 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwlv\" (UniqueName: \"kubernetes.io/projected/897ec217-5614-490e-893e-52e2f87b7422-kube-api-access-mpwlv\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.547038 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:41 crc kubenswrapper[4768]: I1124 18:44:41.547051 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/897ec217-5614-490e-893e-52e2f87b7422-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:44:42 crc kubenswrapper[4768]: I1124 18:44:42.314362 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzf6q" Nov 24 18:44:42 crc kubenswrapper[4768]: I1124 18:44:42.343403 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzf6q"] Nov 24 18:44:42 crc kubenswrapper[4768]: I1124 18:44:42.358502 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zzf6q"] Nov 24 18:44:42 crc kubenswrapper[4768]: I1124 18:44:42.964838 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 24 18:44:43 crc kubenswrapper[4768]: I1124 18:44:43.908950 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897ec217-5614-490e-893e-52e2f87b7422" path="/var/lib/kubelet/pods/897ec217-5614-490e-893e-52e2f87b7422/volumes" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.074184 4768 scope.go:117] "RemoveContainer" containerID="81d6f4cf9f89c103d1c185e18d10d08ef9ded5976903ffd1d906d8bfc349b5ef" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.104772 4768 scope.go:117] "RemoveContainer" containerID="f56ca54822029768a00aa9ef3f3d65afb4a5d3420beac7790d9f7862af2a0dd1" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.130961 4768 scope.go:117] "RemoveContainer" containerID="6ce3d787f405cb55f2496ac50e073ef9076246a1732c5891e4805da40731dea6" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.184145 4768 scope.go:117] "RemoveContainer" containerID="e69bae1c93e3efacb4eb74e45dc2663d4eebf35b375821c2f9cd6d5f63a9854e" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.206533 4768 scope.go:117] "RemoveContainer" containerID="c53e0604152b2d13e447c6824d9047eff5af845863468352f4a02e9e69565251" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.227042 4768 scope.go:117] "RemoveContainer" containerID="3b0d7ed3258fac4bab7163a55e1906ec0dc4b92ec306bbb59e565df493e83d18" Nov 24 18:44:50 crc kubenswrapper[4768]: I1124 18:44:50.249155 4768 scope.go:117] "RemoveContainer" containerID="fb8ec67bf788812bef579c2597a6151eb95b6192e2cd6225d790f1f2853bce55" Nov 24 18:45:00 crc 
kubenswrapper[4768]: I1124 18:45:00.188037 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk"] Nov 24 18:45:00 crc kubenswrapper[4768]: E1124 18:45:00.189077 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="registry-server" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189096 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="registry-server" Nov 24 18:45:00 crc kubenswrapper[4768]: E1124 18:45:00.189108 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon-log" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189116 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon-log" Nov 24 18:45:00 crc kubenswrapper[4768]: E1124 18:45:00.189130 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="extract-utilities" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189143 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="extract-utilities" Nov 24 18:45:00 crc kubenswrapper[4768]: E1124 18:45:00.189164 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189175 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" Nov 24 18:45:00 crc kubenswrapper[4768]: E1124 18:45:00.189199 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="extract-content" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189210 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="extract-content" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189478 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189529 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="375f8ae8-797c-40c7-bd90-93b3538ff9aa" containerName="horizon-log" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.189542 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="897ec217-5614-490e-893e-52e2f87b7422" containerName="registry-server" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.190412 4768 util.go:30] "No sandbox for pod can be found. 
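
The paired cpu_manager/state_mem entries above (and the memory_manager lines that follow) are the resource managers dropping per-container assignments left behind by pods that no longer exist, triggered when the new collect-profiles pod is admitted. A toy model, assuming a flat map keyed by podUID and container name; the real state store is richer:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops CPU assignments whose owning pod is gone,
    // mirroring the "RemoveStaleState: removing container" /
    // "Deleted CPUSet assignment" pairs in the log.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q container=%q\n", k.podUID, k.container)
                delete(assignments, k) // "Deleted CPUSet assignment"
            }
        }
    }

    func main() {
        state := map[key]string{
            {"897ec217-5614-490e-893e-52e2f87b7422", "registry-server"}: "0-3",
            {"375f8ae8-797c-40c7-bd90-93b3538ff9aa", "horizon"}:         "0-3",
        }
        removeStaleState(state, map[string]bool{}) // neither pod is active any more
    }
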
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.194359 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.194776 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.218897 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk"] Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.280736 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-config-volume\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.281041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxx8\" (UniqueName: \"kubernetes.io/projected/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-kube-api-access-slxx8\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.281177 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-secret-volume\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.382999 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slxx8\" (UniqueName: \"kubernetes.io/projected/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-kube-api-access-slxx8\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.383097 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-secret-volume\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.383250 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-config-volume\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.384543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-config-volume\") pod 
\"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.390477 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-secret-volume\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.400731 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slxx8\" (UniqueName: \"kubernetes.io/projected/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-kube-api-access-slxx8\") pod \"collect-profiles-29400165-sdgjk\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:00 crc kubenswrapper[4768]: I1124 18:45:00.524439 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:01 crc kubenswrapper[4768]: I1124 18:45:01.027269 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk"] Nov 24 18:45:01 crc kubenswrapper[4768]: I1124 18:45:01.519096 4768 generic.go:334] "Generic (PLEG): container finished" podID="4c1f5a20-5b45-4b85-ae34-23b3afa0becf" containerID="343c5714e51b5b23f45d9c0eee53ba87dca6ae8b94efd9e49e3c591b8192e7e4" exitCode=0 Nov 24 18:45:01 crc kubenswrapper[4768]: I1124 18:45:01.519154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" event={"ID":"4c1f5a20-5b45-4b85-ae34-23b3afa0becf","Type":"ContainerDied","Data":"343c5714e51b5b23f45d9c0eee53ba87dca6ae8b94efd9e49e3c591b8192e7e4"} Nov 24 18:45:01 crc kubenswrapper[4768]: I1124 18:45:01.519190 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" event={"ID":"4c1f5a20-5b45-4b85-ae34-23b3afa0becf","Type":"ContainerStarted","Data":"cdc19fda9b2d0d0699245bd96e4755ca44d21b8517658f8d6ea9236697297cbd"} Nov 24 18:45:02 crc kubenswrapper[4768]: I1124 18:45:02.988176 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.155148 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slxx8\" (UniqueName: \"kubernetes.io/projected/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-kube-api-access-slxx8\") pod \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.155205 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-config-volume\") pod \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.155264 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-secret-volume\") pod \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\" (UID: \"4c1f5a20-5b45-4b85-ae34-23b3afa0becf\") " Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.155992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c1f5a20-5b45-4b85-ae34-23b3afa0becf" (UID: "4c1f5a20-5b45-4b85-ae34-23b3afa0becf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.161601 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-kube-api-access-slxx8" (OuterVolumeSpecName: "kube-api-access-slxx8") pod "4c1f5a20-5b45-4b85-ae34-23b3afa0becf" (UID: "4c1f5a20-5b45-4b85-ae34-23b3afa0becf"). InnerVolumeSpecName "kube-api-access-slxx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.162183 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c1f5a20-5b45-4b85-ae34-23b3afa0becf" (UID: "4c1f5a20-5b45-4b85-ae34-23b3afa0becf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.258604 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slxx8\" (UniqueName: \"kubernetes.io/projected/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-kube-api-access-slxx8\") on node \"crc\" DevicePath \"\"" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.258649 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.258663 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c1f5a20-5b45-4b85-ae34-23b3afa0becf-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.547144 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" event={"ID":"4c1f5a20-5b45-4b85-ae34-23b3afa0becf","Type":"ContainerDied","Data":"cdc19fda9b2d0d0699245bd96e4755ca44d21b8517658f8d6ea9236697297cbd"} Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.547208 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdc19fda9b2d0d0699245bd96e4755ca44d21b8517658f8d6ea9236697297cbd" Nov 24 18:45:03 crc kubenswrapper[4768]: I1124 18:45:03.547241 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400165-sdgjk" Nov 24 18:45:04 crc kubenswrapper[4768]: I1124 18:45:04.094125 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"] Nov 24 18:45:04 crc kubenswrapper[4768]: I1124 18:45:04.103736 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400120-gfmb9"] Nov 24 18:45:05 crc kubenswrapper[4768]: I1124 18:45:05.909662 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e" path="/var/lib/kubelet/pods/6f5b4be5-f22d-4371-b8bb-ad4c61f5f29e/volumes" Nov 24 18:45:06 crc kubenswrapper[4768]: I1124 18:45:06.930885 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g4959"] Nov 24 18:45:06 crc kubenswrapper[4768]: E1124 18:45:06.931729 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1f5a20-5b45-4b85-ae34-23b3afa0becf" containerName="collect-profiles" Nov 24 18:45:06 crc kubenswrapper[4768]: I1124 18:45:06.931743 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1f5a20-5b45-4b85-ae34-23b3afa0becf" containerName="collect-profiles" Nov 24 18:45:06 crc kubenswrapper[4768]: I1124 18:45:06.931973 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1f5a20-5b45-4b85-ae34-23b3afa0becf" containerName="collect-profiles" Nov 24 18:45:06 crc kubenswrapper[4768]: I1124 18:45:06.933696 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:06 crc kubenswrapper[4768]: I1124 18:45:06.976964 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4959"] Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.061522 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-catalog-content\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.061560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-utilities\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.061584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmkfm\" (UniqueName: \"kubernetes.io/projected/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-kube-api-access-rmkfm\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.163985 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-catalog-content\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.164045 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-utilities\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.164084 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmkfm\" (UniqueName: \"kubernetes.io/projected/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-kube-api-access-rmkfm\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.164592 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-catalog-content\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.164672 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-utilities\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.192817 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rmkfm\" (UniqueName: \"kubernetes.io/projected/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-kube-api-access-rmkfm\") pod \"community-operators-g4959\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.285275 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:07 crc kubenswrapper[4768]: I1124 18:45:07.917910 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4959"] Nov 24 18:45:08 crc kubenswrapper[4768]: I1124 18:45:08.621780 4768 generic.go:334] "Generic (PLEG): container finished" podID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerID="810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8" exitCode=0 Nov 24 18:45:08 crc kubenswrapper[4768]: I1124 18:45:08.621846 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerDied","Data":"810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8"} Nov 24 18:45:08 crc kubenswrapper[4768]: I1124 18:45:08.622098 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerStarted","Data":"0401cbc9aa5f66d2853d003fe7679bfa9e110949c89372d3dbb6538e131e32c5"} Nov 24 18:45:09 crc kubenswrapper[4768]: I1124 18:45:09.634668 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerStarted","Data":"f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f"} Nov 24 18:45:10 crc kubenswrapper[4768]: I1124 18:45:10.650250 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerDied","Data":"f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f"} Nov 24 18:45:10 crc kubenswrapper[4768]: I1124 18:45:10.650127 4768 generic.go:334] "Generic (PLEG): container finished" podID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerID="f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f" exitCode=0 Nov 24 18:45:11 crc kubenswrapper[4768]: I1124 18:45:11.683776 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerStarted","Data":"aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e"} Nov 24 18:45:11 crc kubenswrapper[4768]: I1124 18:45:11.726141 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g4959" podStartSLOduration=3.19949129 podStartE2EDuration="5.726107072s" podCreationTimestamp="2025-11-24 18:45:06 +0000 UTC" firstStartedPulling="2025-11-24 18:45:08.62508563 +0000 UTC m=+3347.485667417" lastFinishedPulling="2025-11-24 18:45:11.151701382 +0000 UTC m=+3350.012283199" observedRunningTime="2025-11-24 18:45:11.714995407 +0000 UTC m=+3350.575577184" watchObservedRunningTime="2025-11-24 18:45:11.726107072 +0000 UTC m=+3350.586688869" Nov 24 18:45:17 crc kubenswrapper[4768]: I1124 18:45:17.286382 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:17 crc kubenswrapper[4768]: I1124 18:45:17.287332 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:17 crc kubenswrapper[4768]: I1124 18:45:17.349863 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:17 crc kubenswrapper[4768]: I1124 18:45:17.835728 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:17 crc kubenswrapper[4768]: I1124 18:45:17.931406 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4959"] Nov 24 18:45:19 crc kubenswrapper[4768]: I1124 18:45:19.787777 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g4959" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="registry-server" containerID="cri-o://aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e" gracePeriod=2 Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.255172 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.375648 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-catalog-content\") pod \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.375805 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmkfm\" (UniqueName: \"kubernetes.io/projected/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-kube-api-access-rmkfm\") pod \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.375838 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-utilities\") pod \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\" (UID: \"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57\") " Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.376766 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-utilities" (OuterVolumeSpecName: "utilities") pod "6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" (UID: "6390ae24-7ebe-4ccf-b29f-ccd53d22dd57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.385095 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-kube-api-access-rmkfm" (OuterVolumeSpecName: "kube-api-access-rmkfm") pod "6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" (UID: "6390ae24-7ebe-4ccf-b29f-ccd53d22dd57"). InnerVolumeSpecName "kube-api-access-rmkfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.451113 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" (UID: "6390ae24-7ebe-4ccf-b29f-ccd53d22dd57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.479025 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.479090 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmkfm\" (UniqueName: \"kubernetes.io/projected/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-kube-api-access-rmkfm\") on node \"crc\" DevicePath \"\"" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.479120 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.804618 4768 generic.go:334] "Generic (PLEG): container finished" podID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerID="aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e" exitCode=0 Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.804677 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerDied","Data":"aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e"} Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.804712 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4959" event={"ID":"6390ae24-7ebe-4ccf-b29f-ccd53d22dd57","Type":"ContainerDied","Data":"0401cbc9aa5f66d2853d003fe7679bfa9e110949c89372d3dbb6538e131e32c5"} Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.804734 4768 scope.go:117] "RemoveContainer" containerID="aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.804919 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g4959" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.849505 4768 scope.go:117] "RemoveContainer" containerID="f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.858786 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4959"] Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.866909 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g4959"] Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.894149 4768 scope.go:117] "RemoveContainer" containerID="810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.956529 4768 scope.go:117] "RemoveContainer" containerID="aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e" Nov 24 18:45:20 crc kubenswrapper[4768]: E1124 18:45:20.957087 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e\": container with ID starting with aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e not found: ID does not exist" containerID="aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.957181 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e"} err="failed to get container status \"aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e\": rpc error: code = NotFound desc = could not find container \"aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e\": container with ID starting with aa66eff5dee6c53231a1ec3ecf558d09439d4a952fc3a9f8c6e57fe57658e47e not found: ID does not exist" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.957239 4768 scope.go:117] "RemoveContainer" containerID="f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f" Nov 24 18:45:20 crc kubenswrapper[4768]: E1124 18:45:20.957813 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f\": container with ID starting with f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f not found: ID does not exist" containerID="f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.957877 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f"} err="failed to get container status \"f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f\": rpc error: code = NotFound desc = could not find container \"f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f\": container with ID starting with f4afaa45ba723de8a7bb6e6bfc2e27d6a8b553f7c5d63526bc5ee3ffe6caed2f not found: ID does not exist" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.957916 4768 scope.go:117] "RemoveContainer" containerID="810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8" Nov 24 18:45:20 crc kubenswrapper[4768]: E1124 18:45:20.958278 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8\": container with ID starting with 810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8 not found: ID does not exist" containerID="810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8" Nov 24 18:45:20 crc kubenswrapper[4768]: I1124 18:45:20.958323 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8"} err="failed to get container status \"810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8\": rpc error: code = NotFound desc = could not find container \"810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8\": container with ID starting with 810ea5fa12801656718aa55ecda0a2d9c796d11afbcc56ff8cd31ca0cde387e8 not found: ID does not exist" Nov 24 18:45:21 crc kubenswrapper[4768]: I1124 18:45:21.936566 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" path="/var/lib/kubelet/pods/6390ae24-7ebe-4ccf-b29f-ccd53d22dd57/volumes" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.242806 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 18:45:31 crc kubenswrapper[4768]: E1124 18:45:31.243686 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="extract-utilities" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.243701 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="extract-utilities" Nov 24 18:45:31 crc kubenswrapper[4768]: E1124 18:45:31.243723 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="extract-content" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.243730 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="extract-content" Nov 24 18:45:31 crc kubenswrapper[4768]: E1124 18:45:31.243740 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="registry-server" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.243747 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="registry-server" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.243927 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6390ae24-7ebe-4ccf-b29f-ccd53d22dd57" containerName="registry-server" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.244689 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.248147 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.248390 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.249954 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jnvrb" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.251538 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.255922 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.392954 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qhq\" (UniqueName: \"kubernetes.io/projected/a70c965c-d29f-4286-b2e4-a580073783c5-kube-api-access-l9qhq\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393007 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-config-data\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393138 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393157 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393191 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393398 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393462 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.393678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496158 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496227 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qhq\" (UniqueName: \"kubernetes.io/projected/a70c965c-d29f-4286-b2e4-a580073783c5-kube-api-access-l9qhq\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496256 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-config-data\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496414 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.496528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.497442 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.497582 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.497621 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.498457 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-config-data\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.498467 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.506171 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.512582 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.517898 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.527146 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qhq\" (UniqueName: \"kubernetes.io/projected/a70c965c-d29f-4286-b2e4-a580073783c5-kube-api-access-l9qhq\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.544049 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " pod="openstack/tempest-tests-tempest" Nov 24 18:45:31 crc kubenswrapper[4768]: I1124 18:45:31.563689 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 18:45:32 crc kubenswrapper[4768]: I1124 18:45:32.022541 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 18:45:32 crc kubenswrapper[4768]: I1124 18:45:32.957914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a70c965c-d29f-4286-b2e4-a580073783c5","Type":"ContainerStarted","Data":"00828831a17cbd63b77e19145daefeee2812f6806f3947fcbe561ebb9b65c9f6"} Nov 24 18:45:50 crc kubenswrapper[4768]: I1124 18:45:50.484178 4768 scope.go:117] "RemoveContainer" containerID="cb909740952a161b457634ca02e7f3bd50236f6792045ef3c443c4c3877a5c9e" Nov 24 18:46:00 crc kubenswrapper[4768]: E1124 18:46:00.248136 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 18:46:00 crc kubenswrapper[4768]: E1124 18:46:00.252374 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
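
Unlike the configmap/secret volumes, the local PV above logs two stages: MountVolume.MountDevice (the device-level mount at /mnt/openstack/pv03, done once per volume per node) followed by MountVolume.SetUp (the per-pod step that exposes it inside the pod's volumes directory). A sketch of that split; the bind step and its wiring are illustrative:

    package main

    import "fmt"

    type localVolume struct {
        name       string
        deviceDir  string // global, per-node mount point
        deviceDone bool
    }

    func (v *localVolume) mountDevice() {
        if v.deviceDone {
            return // already mounted for another pod on this node
        }
        fmt.Printf("MountVolume.MountDevice succeeded for %q, device mount path %q\n", v.name, v.deviceDir)
        v.deviceDone = true
    }

    func (v *localVolume) setUp(podUID string) {
        v.mountDevice()
        // Per-pod step: bind the global mount into the pod's volume directory.
        fmt.Printf("MountVolume.SetUp succeeded for %q (pod %s)\n", v.name, podUID)
    }

    func main() {
        v := &localVolume{name: "local-storage03-crc", deviceDir: "/mnt/openstack/pv03"}
        v.setUp("a70c965c-d29f-4286-b2e4-a580073783c5")
    }
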
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9qhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a70c965c-d29f-4286-b2e4-a580073783c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 18:46:00 crc kubenswrapper[4768]: E1124 18:46:00.254196 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a70c965c-d29f-4286-b2e4-a580073783c5" Nov 24 18:46:00 crc kubenswrapper[4768]: E1124 18:46:00.268120 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a70c965c-d29f-4286-b2e4-a580073783c5" Nov 24 18:46:13 crc kubenswrapper[4768]: I1124 18:46:13.656192 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:46:13 crc kubenswrapper[4768]: I1124 18:46:13.656999 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:46:17 crc kubenswrapper[4768]: I1124 18:46:17.470345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a70c965c-d29f-4286-b2e4-a580073783c5","Type":"ContainerStarted","Data":"65bd01e4950f6c8002373c04311da806d25d2ab415e4f6a011604052e65ef37d"} Nov 24 18:46:17 crc kubenswrapper[4768]: I1124 18:46:17.499754 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.162975945 podStartE2EDuration="47.499730653s" podCreationTimestamp="2025-11-24 18:45:30 +0000 UTC" firstStartedPulling="2025-11-24 18:45:32.031334571 +0000 UTC m=+3370.891916358" lastFinishedPulling="2025-11-24 18:46:15.368089279 +0000 UTC m=+3414.228671066" observedRunningTime="2025-11-24 18:46:17.491056883 +0000 UTC m=+3416.351638670" watchObservedRunningTime="2025-11-24 18:46:17.499730653 +0000 UTC m=+3416.360312440" Nov 24 18:46:43 crc kubenswrapper[4768]: I1124 18:46:43.656534 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:46:43 crc kubenswrapper[4768]: I1124 18:46:43.657266 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:47:13 crc kubenswrapper[4768]: I1124 18:47:13.656113 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:47:13 crc kubenswrapper[4768]: I1124 18:47:13.657338 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:47:13 crc kubenswrapper[4768]: I1124 18:47:13.657413 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:47:13 crc kubenswrapper[4768]: I1124 18:47:13.658239 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9eb401ad1b0ef5f0f1ac2c17170a6fef38691d5809ef5a0b3d5a468c793d8b00"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:47:13 crc kubenswrapper[4768]: I1124 18:47:13.658305 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://9eb401ad1b0ef5f0f1ac2c17170a6fef38691d5809ef5a0b3d5a468c793d8b00" gracePeriod=600 Nov 24 18:47:14 crc kubenswrapper[4768]: I1124 18:47:14.167729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"9eb401ad1b0ef5f0f1ac2c17170a6fef38691d5809ef5a0b3d5a468c793d8b00"} Nov 24 18:47:14 crc kubenswrapper[4768]: I1124 18:47:14.168258 4768 scope.go:117] "RemoveContainer" containerID="38072d948ec566428c73e607e6cfcd4ee55a549feb1e80fbab8061d2948adb3a" Nov 24 18:47:14 crc kubenswrapper[4768]: I1124 18:47:14.167684 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="9eb401ad1b0ef5f0f1ac2c17170a6fef38691d5809ef5a0b3d5a468c793d8b00" exitCode=0 Nov 24 18:47:14 crc kubenswrapper[4768]: I1124 18:47:14.168555 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"} Nov 24 18:49:43 crc kubenswrapper[4768]: I1124 18:49:43.656343 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:49:43 crc kubenswrapper[4768]: I1124 18:49:43.657408 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:50:13 crc kubenswrapper[4768]: I1124 18:50:13.657080 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:50:13 crc kubenswrapper[4768]: I1124 18:50:13.657767 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" 
podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:50:33 crc kubenswrapper[4768]: I1124 18:50:33.502155 4768 generic.go:334] "Generic (PLEG): container finished" podID="a70c965c-d29f-4286-b2e4-a580073783c5" containerID="65bd01e4950f6c8002373c04311da806d25d2ab415e4f6a011604052e65ef37d" exitCode=0 Nov 24 18:50:33 crc kubenswrapper[4768]: I1124 18:50:33.502268 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a70c965c-d29f-4286-b2e4-a580073783c5","Type":"ContainerDied","Data":"65bd01e4950f6c8002373c04311da806d25d2ab415e4f6a011604052e65ef37d"} Nov 24 18:50:34 crc kubenswrapper[4768]: I1124 18:50:34.932012 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.005932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006054 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-config-data\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ca-certs\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-workdir\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006344 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config-secret\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006379 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qhq\" (UniqueName: \"kubernetes.io/projected/a70c965c-d29f-4286-b2e4-a580073783c5-kube-api-access-l9qhq\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006421 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-temporary\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.006463 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.007604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.007969 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-config-data" (OuterVolumeSpecName: "config-data") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.012892 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.015246 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70c965c-d29f-4286-b2e4-a580073783c5-kube-api-access-l9qhq" (OuterVolumeSpecName: "kube-api-access-l9qhq") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "kube-api-access-l9qhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.017933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.040232 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.040851 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.060508 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.109048 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ssh-key\") pod \"a70c965c-d29f-4286-b2e4-a580073783c5\" (UID: \"a70c965c-d29f-4286-b2e4-a580073783c5\") " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110603 4768 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110642 4768 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110663 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110682 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9qhq\" (UniqueName: \"kubernetes.io/projected/a70c965c-d29f-4286-b2e4-a580073783c5-kube-api-access-l9qhq\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110696 4768 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a70c965c-d29f-4286-b2e4-a580073783c5-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110711 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110758 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.110774 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a70c965c-d29f-4286-b2e4-a580073783c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.136992 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.137327 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod 
"a70c965c-d29f-4286-b2e4-a580073783c5" (UID: "a70c965c-d29f-4286-b2e4-a580073783c5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.213385 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a70c965c-d29f-4286-b2e4-a580073783c5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.213429 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.531332 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a70c965c-d29f-4286-b2e4-a580073783c5","Type":"ContainerDied","Data":"00828831a17cbd63b77e19145daefeee2812f6806f3947fcbe561ebb9b65c9f6"} Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.531415 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 18:50:35 crc kubenswrapper[4768]: I1124 18:50:35.531434 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00828831a17cbd63b77e19145daefeee2812f6806f3947fcbe561ebb9b65c9f6" Nov 24 18:50:43 crc kubenswrapper[4768]: I1124 18:50:43.656565 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:50:43 crc kubenswrapper[4768]: I1124 18:50:43.657476 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:50:43 crc kubenswrapper[4768]: I1124 18:50:43.657559 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:50:43 crc kubenswrapper[4768]: I1124 18:50:43.658446 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:50:43 crc kubenswrapper[4768]: I1124 18:50:43.658533 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" gracePeriod=600 Nov 24 18:50:43 crc kubenswrapper[4768]: E1124 18:50:43.783673 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.656211 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" exitCode=0 Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.656266 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"} Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.656311 4768 scope.go:117] "RemoveContainer" containerID="9eb401ad1b0ef5f0f1ac2c17170a6fef38691d5809ef5a0b3d5a468c793d8b00" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.656984 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:50:44 crc kubenswrapper[4768]: E1124 18:50:44.657275 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.971048 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 18:50:44 crc kubenswrapper[4768]: E1124 18:50:44.971871 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70c965c-d29f-4286-b2e4-a580073783c5" containerName="tempest-tests-tempest-tests-runner" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.971891 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70c965c-d29f-4286-b2e4-a580073783c5" containerName="tempest-tests-tempest-tests-runner" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.972166 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70c965c-d29f-4286-b2e4-a580073783c5" containerName="tempest-tests-tempest-tests-runner" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.972866 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.975871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jnvrb" Nov 24 18:50:44 crc kubenswrapper[4768]: I1124 18:50:44.985623 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.152695 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqg8\" (UniqueName: \"kubernetes.io/projected/40331542-20c7-4f93-8571-cc1bcaad9d48-kube-api-access-cjqg8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.152829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.254814 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.255156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjqg8\" (UniqueName: \"kubernetes.io/projected/40331542-20c7-4f93-8571-cc1bcaad9d48-kube-api-access-cjqg8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.255637 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.277813 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjqg8\" (UniqueName: \"kubernetes.io/projected/40331542-20c7-4f93-8571-cc1bcaad9d48-kube-api-access-cjqg8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.293757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40331542-20c7-4f93-8571-cc1bcaad9d48\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc 
kubenswrapper[4768]: I1124 18:50:45.315755 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.834156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 18:50:45 crc kubenswrapper[4768]: W1124 18:50:45.850880 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40331542_20c7_4f93_8571_cc1bcaad9d48.slice/crio-7437e3fc1004c00002656238c8744db9d08ea58dd0afab913f38781a3845d46f WatchSource:0}: Error finding container 7437e3fc1004c00002656238c8744db9d08ea58dd0afab913f38781a3845d46f: Status 404 returned error can't find the container with id 7437e3fc1004c00002656238c8744db9d08ea58dd0afab913f38781a3845d46f Nov 24 18:50:45 crc kubenswrapper[4768]: I1124 18:50:45.854091 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 18:50:46 crc kubenswrapper[4768]: I1124 18:50:46.685408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"40331542-20c7-4f93-8571-cc1bcaad9d48","Type":"ContainerStarted","Data":"7437e3fc1004c00002656238c8744db9d08ea58dd0afab913f38781a3845d46f"} Nov 24 18:50:47 crc kubenswrapper[4768]: I1124 18:50:47.701945 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"40331542-20c7-4f93-8571-cc1bcaad9d48","Type":"ContainerStarted","Data":"1702da90e52b6c5c8d9e70d3ece1ef6c5be007f3a3c7050712396d50485d1652"} Nov 24 18:50:47 crc kubenswrapper[4768]: I1124 18:50:47.724459 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.89355841 podStartE2EDuration="3.724429601s" podCreationTimestamp="2025-11-24 18:50:44 +0000 UTC" firstStartedPulling="2025-11-24 18:50:45.853887561 +0000 UTC m=+3684.714469338" lastFinishedPulling="2025-11-24 18:50:46.684758752 +0000 UTC m=+3685.545340529" observedRunningTime="2025-11-24 18:50:47.721932124 +0000 UTC m=+3686.582513901" watchObservedRunningTime="2025-11-24 18:50:47.724429601 +0000 UTC m=+3686.585011418" Nov 24 18:50:57 crc kubenswrapper[4768]: I1124 18:50:57.898682 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:50:57 crc kubenswrapper[4768]: E1124 18:50:57.899876 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:51:00 crc kubenswrapper[4768]: I1124 18:51:00.290604 4768 scope.go:117] "RemoveContainer" containerID="4365912c8ccf374969fc839a7bbbb4eef2abdb0b2cdbbbae1de1d79ceaf00d7e" Nov 24 18:51:00 crc kubenswrapper[4768]: I1124 18:51:00.331053 4768 scope.go:117] "RemoveContainer" containerID="4c85d97719047fb6105ee424a052ad3b0b1d3c76f580071a39ff834dcfcdb5df" Nov 24 18:51:10 crc kubenswrapper[4768]: I1124 18:51:10.898360 4768 scope.go:117] "RemoveContainer" 
containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:51:10 crc kubenswrapper[4768]: E1124 18:51:10.899985 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.630266 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thnqt/must-gather-ss45m"] Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.632156 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.634508 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-thnqt"/"openshift-service-ca.crt" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.634736 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-thnqt"/"kube-root-ca.crt" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.636644 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-thnqt"/"default-dockercfg-sxwf4" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.639322 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-thnqt/must-gather-ss45m"] Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.747283 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f538644d-3393-4e2f-9df8-8e2ca7c01444-must-gather-output\") pod \"must-gather-ss45m\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.747471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6dv\" (UniqueName: \"kubernetes.io/projected/f538644d-3393-4e2f-9df8-8e2ca7c01444-kube-api-access-6v6dv\") pod \"must-gather-ss45m\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.848878 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f538644d-3393-4e2f-9df8-8e2ca7c01444-must-gather-output\") pod \"must-gather-ss45m\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.849094 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v6dv\" (UniqueName: \"kubernetes.io/projected/f538644d-3393-4e2f-9df8-8e2ca7c01444-kube-api-access-6v6dv\") pod \"must-gather-ss45m\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.849337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f538644d-3393-4e2f-9df8-8e2ca7c01444-must-gather-output\") pod 
\"must-gather-ss45m\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.877534 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v6dv\" (UniqueName: \"kubernetes.io/projected/f538644d-3393-4e2f-9df8-8e2ca7c01444-kube-api-access-6v6dv\") pod \"must-gather-ss45m\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:13 crc kubenswrapper[4768]: I1124 18:51:13.952015 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:51:14 crc kubenswrapper[4768]: I1124 18:51:14.509391 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-thnqt/must-gather-ss45m"] Nov 24 18:51:15 crc kubenswrapper[4768]: I1124 18:51:15.033595 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/must-gather-ss45m" event={"ID":"f538644d-3393-4e2f-9df8-8e2ca7c01444","Type":"ContainerStarted","Data":"5a993cf82699211ff90eebe67db90eac5e92016b63d89e62afc9e67583d37839"} Nov 24 18:51:19 crc kubenswrapper[4768]: I1124 18:51:19.072935 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/must-gather-ss45m" event={"ID":"f538644d-3393-4e2f-9df8-8e2ca7c01444","Type":"ContainerStarted","Data":"a5c05b8d9734f0d0ea9da0d405cecab458aa964b5c609cf93b86d474e771d876"} Nov 24 18:51:19 crc kubenswrapper[4768]: I1124 18:51:19.073439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/must-gather-ss45m" event={"ID":"f538644d-3393-4e2f-9df8-8e2ca7c01444","Type":"ContainerStarted","Data":"29a5c304d31b388157e5fcd8ed984eb5fa9e9c1459b6604e0fc08a6ce551bd19"} Nov 24 18:51:19 crc kubenswrapper[4768]: I1124 18:51:19.093918 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-thnqt/must-gather-ss45m" podStartSLOduration=2.413574148 podStartE2EDuration="6.093900049s" podCreationTimestamp="2025-11-24 18:51:13 +0000 UTC" firstStartedPulling="2025-11-24 18:51:14.512915149 +0000 UTC m=+3713.373496926" lastFinishedPulling="2025-11-24 18:51:18.19324105 +0000 UTC m=+3717.053822827" observedRunningTime="2025-11-24 18:51:19.086124083 +0000 UTC m=+3717.946705860" watchObservedRunningTime="2025-11-24 18:51:19.093900049 +0000 UTC m=+3717.954481816" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.431430 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thnqt/crc-debug-nj59v"] Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.434365 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.565824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8902632-2a5d-400c-be67-3fa6b31150e4-host\") pod \"crc-debug-nj59v\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.566178 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28tl\" (UniqueName: \"kubernetes.io/projected/d8902632-2a5d-400c-be67-3fa6b31150e4-kube-api-access-q28tl\") pod \"crc-debug-nj59v\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.668516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8902632-2a5d-400c-be67-3fa6b31150e4-host\") pod \"crc-debug-nj59v\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.668602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28tl\" (UniqueName: \"kubernetes.io/projected/d8902632-2a5d-400c-be67-3fa6b31150e4-kube-api-access-q28tl\") pod \"crc-debug-nj59v\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.668694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8902632-2a5d-400c-be67-3fa6b31150e4-host\") pod \"crc-debug-nj59v\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.688690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28tl\" (UniqueName: \"kubernetes.io/projected/d8902632-2a5d-400c-be67-3fa6b31150e4-kube-api-access-q28tl\") pod \"crc-debug-nj59v\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: I1124 18:51:22.752858 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:51:22 crc kubenswrapper[4768]: W1124 18:51:22.794775 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8902632_2a5d_400c_be67_3fa6b31150e4.slice/crio-d04413459334a31082216f1398cf72a85daadd3c229dcd24644ca867018169bb WatchSource:0}: Error finding container d04413459334a31082216f1398cf72a85daadd3c229dcd24644ca867018169bb: Status 404 returned error can't find the container with id d04413459334a31082216f1398cf72a85daadd3c229dcd24644ca867018169bb Nov 24 18:51:23 crc kubenswrapper[4768]: I1124 18:51:23.123107 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-nj59v" event={"ID":"d8902632-2a5d-400c-be67-3fa6b31150e4","Type":"ContainerStarted","Data":"d04413459334a31082216f1398cf72a85daadd3c229dcd24644ca867018169bb"} Nov 24 18:51:24 crc kubenswrapper[4768]: I1124 18:51:24.898622 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:51:24 crc kubenswrapper[4768]: E1124 18:51:24.899149 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:51:32 crc kubenswrapper[4768]: I1124 18:51:32.216840 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-nj59v" event={"ID":"d8902632-2a5d-400c-be67-3fa6b31150e4","Type":"ContainerStarted","Data":"03ce9086b6bb5dcf65db26ccd5c0a3213521e55943c70fe898f7a183b0881743"} Nov 24 18:51:32 crc kubenswrapper[4768]: I1124 18:51:32.241698 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-thnqt/crc-debug-nj59v" podStartSLOduration=1.064358122 podStartE2EDuration="10.241666657s" podCreationTimestamp="2025-11-24 18:51:22 +0000 UTC" firstStartedPulling="2025-11-24 18:51:22.800378862 +0000 UTC m=+3721.660960639" lastFinishedPulling="2025-11-24 18:51:31.977687397 +0000 UTC m=+3730.838269174" observedRunningTime="2025-11-24 18:51:32.233532821 +0000 UTC m=+3731.094114618" watchObservedRunningTime="2025-11-24 18:51:32.241666657 +0000 UTC m=+3731.102248434" Nov 24 18:51:38 crc kubenswrapper[4768]: I1124 18:51:38.898989 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:51:38 crc kubenswrapper[4768]: E1124 18:51:38.899780 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:51:52 crc kubenswrapper[4768]: I1124 18:51:52.898516 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:51:52 crc kubenswrapper[4768]: E1124 18:51:52.899232 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:52:04 crc kubenswrapper[4768]: I1124 18:52:04.898620 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:52:04 crc kubenswrapper[4768]: E1124 18:52:04.899580 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:52:16 crc kubenswrapper[4768]: I1124 18:52:16.683465 4768 generic.go:334] "Generic (PLEG): container finished" podID="d8902632-2a5d-400c-be67-3fa6b31150e4" containerID="03ce9086b6bb5dcf65db26ccd5c0a3213521e55943c70fe898f7a183b0881743" exitCode=0 Nov 24 18:52:16 crc kubenswrapper[4768]: I1124 18:52:16.683575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-nj59v" event={"ID":"d8902632-2a5d-400c-be67-3fa6b31150e4","Type":"ContainerDied","Data":"03ce9086b6bb5dcf65db26ccd5c0a3213521e55943c70fe898f7a183b0881743"} Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.797010 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.828210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8902632-2a5d-400c-be67-3fa6b31150e4-host\") pod \"d8902632-2a5d-400c-be67-3fa6b31150e4\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.828652 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q28tl\" (UniqueName: \"kubernetes.io/projected/d8902632-2a5d-400c-be67-3fa6b31150e4-kube-api-access-q28tl\") pod \"d8902632-2a5d-400c-be67-3fa6b31150e4\" (UID: \"d8902632-2a5d-400c-be67-3fa6b31150e4\") " Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.828327 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8902632-2a5d-400c-be67-3fa6b31150e4-host" (OuterVolumeSpecName: "host") pod "d8902632-2a5d-400c-be67-3fa6b31150e4" (UID: "d8902632-2a5d-400c-be67-3fa6b31150e4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.829550 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8902632-2a5d-400c-be67-3fa6b31150e4-host\") on node \"crc\" DevicePath \"\"" Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.836903 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8902632-2a5d-400c-be67-3fa6b31150e4-kube-api-access-q28tl" (OuterVolumeSpecName: "kube-api-access-q28tl") pod "d8902632-2a5d-400c-be67-3fa6b31150e4" (UID: "d8902632-2a5d-400c-be67-3fa6b31150e4"). 
InnerVolumeSpecName "kube-api-access-q28tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.847189 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thnqt/crc-debug-nj59v"] Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.862455 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thnqt/crc-debug-nj59v"] Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.911748 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8902632-2a5d-400c-be67-3fa6b31150e4" path="/var/lib/kubelet/pods/d8902632-2a5d-400c-be67-3fa6b31150e4/volumes" Nov 24 18:52:17 crc kubenswrapper[4768]: I1124 18:52:17.932313 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q28tl\" (UniqueName: \"kubernetes.io/projected/d8902632-2a5d-400c-be67-3fa6b31150e4-kube-api-access-q28tl\") on node \"crc\" DevicePath \"\"" Nov 24 18:52:18 crc kubenswrapper[4768]: I1124 18:52:18.715926 4768 scope.go:117] "RemoveContainer" containerID="03ce9086b6bb5dcf65db26ccd5c0a3213521e55943c70fe898f7a183b0881743" Nov 24 18:52:18 crc kubenswrapper[4768]: I1124 18:52:18.715955 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-nj59v" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.014574 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thnqt/crc-debug-rd722"] Nov 24 18:52:19 crc kubenswrapper[4768]: E1124 18:52:19.014979 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8902632-2a5d-400c-be67-3fa6b31150e4" containerName="container-00" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.014991 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8902632-2a5d-400c-be67-3fa6b31150e4" containerName="container-00" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.015161 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8902632-2a5d-400c-be67-3fa6b31150e4" containerName="container-00" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.015794 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.057604 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa4f7ce0-db10-4086-9d23-e1b14f032f78-host\") pod \"crc-debug-rd722\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.057795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xkx\" (UniqueName: \"kubernetes.io/projected/aa4f7ce0-db10-4086-9d23-e1b14f032f78-kube-api-access-67xkx\") pod \"crc-debug-rd722\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.159552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa4f7ce0-db10-4086-9d23-e1b14f032f78-host\") pod \"crc-debug-rd722\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.159682 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xkx\" (UniqueName: \"kubernetes.io/projected/aa4f7ce0-db10-4086-9d23-e1b14f032f78-kube-api-access-67xkx\") pod \"crc-debug-rd722\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.159744 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa4f7ce0-db10-4086-9d23-e1b14f032f78-host\") pod \"crc-debug-rd722\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.184732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xkx\" (UniqueName: \"kubernetes.io/projected/aa4f7ce0-db10-4086-9d23-e1b14f032f78-kube-api-access-67xkx\") pod \"crc-debug-rd722\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.333715 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:19 crc kubenswrapper[4768]: W1124 18:52:19.385446 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa4f7ce0_db10_4086_9d23_e1b14f032f78.slice/crio-d8af0dd9e9fd7d6485fd70f5ce1d0eecca9a1b622fbfe1e7f833f2af34703d22 WatchSource:0}: Error finding container d8af0dd9e9fd7d6485fd70f5ce1d0eecca9a1b622fbfe1e7f833f2af34703d22: Status 404 returned error can't find the container with id d8af0dd9e9fd7d6485fd70f5ce1d0eecca9a1b622fbfe1e7f833f2af34703d22 Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.729236 4768 generic.go:334] "Generic (PLEG): container finished" podID="aa4f7ce0-db10-4086-9d23-e1b14f032f78" containerID="acf43ca280f70d363974a6d7acce4d5b1a110e713584c7b276a00891a237aa17" exitCode=0 Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.729344 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-rd722" event={"ID":"aa4f7ce0-db10-4086-9d23-e1b14f032f78","Type":"ContainerDied","Data":"acf43ca280f70d363974a6d7acce4d5b1a110e713584c7b276a00891a237aa17"} Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.729686 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-rd722" event={"ID":"aa4f7ce0-db10-4086-9d23-e1b14f032f78","Type":"ContainerStarted","Data":"d8af0dd9e9fd7d6485fd70f5ce1d0eecca9a1b622fbfe1e7f833f2af34703d22"} Nov 24 18:52:19 crc kubenswrapper[4768]: I1124 18:52:19.898639 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:52:19 crc kubenswrapper[4768]: E1124 18:52:19.898958 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.303915 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thnqt/crc-debug-rd722"] Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.316872 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thnqt/crc-debug-rd722"] Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.853290 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.992633 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67xkx\" (UniqueName: \"kubernetes.io/projected/aa4f7ce0-db10-4086-9d23-e1b14f032f78-kube-api-access-67xkx\") pod \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.993167 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa4f7ce0-db10-4086-9d23-e1b14f032f78-host\") pod \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\" (UID: \"aa4f7ce0-db10-4086-9d23-e1b14f032f78\") " Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.993290 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4f7ce0-db10-4086-9d23-e1b14f032f78-host" (OuterVolumeSpecName: "host") pod "aa4f7ce0-db10-4086-9d23-e1b14f032f78" (UID: "aa4f7ce0-db10-4086-9d23-e1b14f032f78"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:52:20 crc kubenswrapper[4768]: I1124 18:52:20.994211 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa4f7ce0-db10-4086-9d23-e1b14f032f78-host\") on node \"crc\" DevicePath \"\"" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.001635 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4f7ce0-db10-4086-9d23-e1b14f032f78-kube-api-access-67xkx" (OuterVolumeSpecName: "kube-api-access-67xkx") pod "aa4f7ce0-db10-4086-9d23-e1b14f032f78" (UID: "aa4f7ce0-db10-4086-9d23-e1b14f032f78"). InnerVolumeSpecName "kube-api-access-67xkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.096003 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67xkx\" (UniqueName: \"kubernetes.io/projected/aa4f7ce0-db10-4086-9d23-e1b14f032f78-kube-api-access-67xkx\") on node \"crc\" DevicePath \"\"" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.489936 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-thnqt/crc-debug-v5mww"] Nov 24 18:52:21 crc kubenswrapper[4768]: E1124 18:52:21.490389 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4f7ce0-db10-4086-9d23-e1b14f032f78" containerName="container-00" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.490406 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4f7ce0-db10-4086-9d23-e1b14f032f78" containerName="container-00" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.490665 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4f7ce0-db10-4086-9d23-e1b14f032f78" containerName="container-00" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.491460 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.505719 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c06e6bcf-c471-4088-94aa-c6b197fffe42-host\") pod \"crc-debug-v5mww\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.505877 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9nz\" (UniqueName: \"kubernetes.io/projected/c06e6bcf-c471-4088-94aa-c6b197fffe42-kube-api-access-mf9nz\") pod \"crc-debug-v5mww\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.608185 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c06e6bcf-c471-4088-94aa-c6b197fffe42-host\") pod \"crc-debug-v5mww\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.608293 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9nz\" (UniqueName: \"kubernetes.io/projected/c06e6bcf-c471-4088-94aa-c6b197fffe42-kube-api-access-mf9nz\") pod \"crc-debug-v5mww\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.608418 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c06e6bcf-c471-4088-94aa-c6b197fffe42-host\") pod \"crc-debug-v5mww\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.634329 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9nz\" (UniqueName: \"kubernetes.io/projected/c06e6bcf-c471-4088-94aa-c6b197fffe42-kube-api-access-mf9nz\") pod \"crc-debug-v5mww\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.753271 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8af0dd9e9fd7d6485fd70f5ce1d0eecca9a1b622fbfe1e7f833f2af34703d22" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.753383 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-rd722" Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.832715 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:21 crc kubenswrapper[4768]: W1124 18:52:21.873771 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc06e6bcf_c471_4088_94aa_c6b197fffe42.slice/crio-bbe47d21df06845fd1d2b282a606208948df5c91b735dac4fe5085abb0ff68d6 WatchSource:0}: Error finding container bbe47d21df06845fd1d2b282a606208948df5c91b735dac4fe5085abb0ff68d6: Status 404 returned error can't find the container with id bbe47d21df06845fd1d2b282a606208948df5c91b735dac4fe5085abb0ff68d6 Nov 24 18:52:21 crc kubenswrapper[4768]: I1124 18:52:21.925681 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4f7ce0-db10-4086-9d23-e1b14f032f78" path="/var/lib/kubelet/pods/aa4f7ce0-db10-4086-9d23-e1b14f032f78/volumes" Nov 24 18:52:22 crc kubenswrapper[4768]: I1124 18:52:22.766053 4768 generic.go:334] "Generic (PLEG): container finished" podID="c06e6bcf-c471-4088-94aa-c6b197fffe42" containerID="ed6db36215fb1b42f109c5fce3a9881188c509b6224819493d1781c769820ded" exitCode=0 Nov 24 18:52:22 crc kubenswrapper[4768]: I1124 18:52:22.766146 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-v5mww" event={"ID":"c06e6bcf-c471-4088-94aa-c6b197fffe42","Type":"ContainerDied","Data":"ed6db36215fb1b42f109c5fce3a9881188c509b6224819493d1781c769820ded"} Nov 24 18:52:22 crc kubenswrapper[4768]: I1124 18:52:22.766375 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/crc-debug-v5mww" event={"ID":"c06e6bcf-c471-4088-94aa-c6b197fffe42","Type":"ContainerStarted","Data":"bbe47d21df06845fd1d2b282a606208948df5c91b735dac4fe5085abb0ff68d6"} Nov 24 18:52:22 crc kubenswrapper[4768]: I1124 18:52:22.822552 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thnqt/crc-debug-v5mww"] Nov 24 18:52:22 crc kubenswrapper[4768]: I1124 18:52:22.835217 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thnqt/crc-debug-v5mww"] Nov 24 18:52:23 crc kubenswrapper[4768]: I1124 18:52:23.918766 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.064173 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf9nz\" (UniqueName: \"kubernetes.io/projected/c06e6bcf-c471-4088-94aa-c6b197fffe42-kube-api-access-mf9nz\") pod \"c06e6bcf-c471-4088-94aa-c6b197fffe42\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.064652 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c06e6bcf-c471-4088-94aa-c6b197fffe42-host\") pod \"c06e6bcf-c471-4088-94aa-c6b197fffe42\" (UID: \"c06e6bcf-c471-4088-94aa-c6b197fffe42\") " Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.064755 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c06e6bcf-c471-4088-94aa-c6b197fffe42-host" (OuterVolumeSpecName: "host") pod "c06e6bcf-c471-4088-94aa-c6b197fffe42" (UID: "c06e6bcf-c471-4088-94aa-c6b197fffe42"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.065995 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c06e6bcf-c471-4088-94aa-c6b197fffe42-host\") on node \"crc\" DevicePath \"\"" Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.074338 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06e6bcf-c471-4088-94aa-c6b197fffe42-kube-api-access-mf9nz" (OuterVolumeSpecName: "kube-api-access-mf9nz") pod "c06e6bcf-c471-4088-94aa-c6b197fffe42" (UID: "c06e6bcf-c471-4088-94aa-c6b197fffe42"). InnerVolumeSpecName "kube-api-access-mf9nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.169640 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf9nz\" (UniqueName: \"kubernetes.io/projected/c06e6bcf-c471-4088-94aa-c6b197fffe42-kube-api-access-mf9nz\") on node \"crc\" DevicePath \"\"" Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.795520 4768 scope.go:117] "RemoveContainer" containerID="ed6db36215fb1b42f109c5fce3a9881188c509b6224819493d1781c769820ded" Nov 24 18:52:24 crc kubenswrapper[4768]: I1124 18:52:24.795586 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/crc-debug-v5mww" Nov 24 18:52:25 crc kubenswrapper[4768]: I1124 18:52:25.918054 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06e6bcf-c471-4088-94aa-c6b197fffe42" path="/var/lib/kubelet/pods/c06e6bcf-c471-4088-94aa-c6b197fffe42/volumes" Nov 24 18:52:34 crc kubenswrapper[4768]: I1124 18:52:34.898649 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:52:34 crc kubenswrapper[4768]: E1124 18:52:34.899520 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:52:45 crc kubenswrapper[4768]: I1124 18:52:45.900549 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:52:45 crc kubenswrapper[4768]: E1124 18:52:45.901697 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.109950 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cbpmb"] Nov 24 18:52:52 crc kubenswrapper[4768]: E1124 18:52:52.111293 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06e6bcf-c471-4088-94aa-c6b197fffe42" containerName="container-00" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.111374 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06e6bcf-c471-4088-94aa-c6b197fffe42" 
containerName="container-00" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.111648 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06e6bcf-c471-4088-94aa-c6b197fffe42" containerName="container-00" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.114160 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.122174 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbpmb"] Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.218048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zxm\" (UniqueName: \"kubernetes.io/projected/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-kube-api-access-49zxm\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.218225 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-utilities\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.218503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-catalog-content\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.320497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-catalog-content\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.320600 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zxm\" (UniqueName: \"kubernetes.io/projected/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-kube-api-access-49zxm\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.320667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-utilities\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.321323 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-utilities\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.321354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-catalog-content\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.362341 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zxm\" (UniqueName: \"kubernetes.io/projected/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-kube-api-access-49zxm\") pod \"redhat-marketplace-cbpmb\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.443437 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:52:52 crc kubenswrapper[4768]: I1124 18:52:52.932456 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbpmb"] Nov 24 18:52:53 crc kubenswrapper[4768]: I1124 18:52:53.150143 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbpmb" event={"ID":"9bdf0f0a-88bf-4a26-a622-1815fdd3031f","Type":"ContainerStarted","Data":"6e5adfcedf911ca45d1d468fdfcc28bf10d887b791314e529f872e5b71c7d6bb"} Nov 24 18:52:54 crc kubenswrapper[4768]: I1124 18:52:54.161769 4768 generic.go:334] "Generic (PLEG): container finished" podID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerID="4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d" exitCode=0 Nov 24 18:52:54 crc kubenswrapper[4768]: I1124 18:52:54.161820 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbpmb" event={"ID":"9bdf0f0a-88bf-4a26-a622-1815fdd3031f","Type":"ContainerDied","Data":"4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d"} Nov 24 18:52:56 crc kubenswrapper[4768]: I1124 18:52:56.186438 4768 generic.go:334] "Generic (PLEG): container finished" podID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerID="a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f" exitCode=0 Nov 24 18:52:56 crc kubenswrapper[4768]: I1124 18:52:56.186544 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbpmb" event={"ID":"9bdf0f0a-88bf-4a26-a622-1815fdd3031f","Type":"ContainerDied","Data":"a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f"} Nov 24 18:52:58 crc kubenswrapper[4768]: I1124 18:52:58.218097 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbpmb" event={"ID":"9bdf0f0a-88bf-4a26-a622-1815fdd3031f","Type":"ContainerStarted","Data":"b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d"} Nov 24 18:52:58 crc kubenswrapper[4768]: I1124 18:52:58.252162 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cbpmb" podStartSLOduration=3.5013762489999998 podStartE2EDuration="6.252131599s" podCreationTimestamp="2025-11-24 18:52:52 +0000 UTC" firstStartedPulling="2025-11-24 18:52:54.164122068 +0000 UTC m=+3813.024703845" lastFinishedPulling="2025-11-24 18:52:56.914877418 +0000 UTC m=+3815.775459195" observedRunningTime="2025-11-24 18:52:58.238713988 +0000 UTC m=+3817.099295765" watchObservedRunningTime="2025-11-24 18:52:58.252131599 +0000 UTC m=+3817.112713376" Nov 24 18:52:59 crc kubenswrapper[4768]: I1124 18:52:59.898975 4768 scope.go:117] "RemoveContainer" 
containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:52:59 crc kubenswrapper[4768]: E1124 18:52:59.900142 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:53:02 crc kubenswrapper[4768]: I1124 18:53:02.444357 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:53:02 crc kubenswrapper[4768]: I1124 18:53:02.445117 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:53:02 crc kubenswrapper[4768]: I1124 18:53:02.506220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:53:03 crc kubenswrapper[4768]: I1124 18:53:03.332846 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:53:03 crc kubenswrapper[4768]: I1124 18:53:03.400328 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbpmb"] Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.298614 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cbpmb" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="registry-server" containerID="cri-o://b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d" gracePeriod=2 Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.718227 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbf4cbf68-zhhj4_22661dfe-b7e1-4894-ae13-dab13e09c845/barbican-api/0.log" Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.840718 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.938922 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-utilities\") pod \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.938986 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-catalog-content\") pod \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.939091 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49zxm\" (UniqueName: \"kubernetes.io/projected/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-kube-api-access-49zxm\") pod \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\" (UID: \"9bdf0f0a-88bf-4a26-a622-1815fdd3031f\") " Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.940150 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-utilities" (OuterVolumeSpecName: "utilities") pod "9bdf0f0a-88bf-4a26-a622-1815fdd3031f" (UID: "9bdf0f0a-88bf-4a26-a622-1815fdd3031f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:53:05 crc kubenswrapper[4768]: I1124 18:53:05.991213 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bdf0f0a-88bf-4a26-a622-1815fdd3031f" (UID: "9bdf0f0a-88bf-4a26-a622-1815fdd3031f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.041134 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.041167 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.079028 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbf4cbf68-zhhj4_22661dfe-b7e1-4894-ae13-dab13e09c845/barbican-api-log/0.log" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.314062 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbpmb" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.314153 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbpmb" event={"ID":"9bdf0f0a-88bf-4a26-a622-1815fdd3031f","Type":"ContainerDied","Data":"b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d"} Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.314239 4768 scope.go:117] "RemoveContainer" containerID="b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.314003 4768 generic.go:334] "Generic (PLEG): container finished" podID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerID="b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d" exitCode=0 Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.314840 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbpmb" event={"ID":"9bdf0f0a-88bf-4a26-a622-1815fdd3031f","Type":"ContainerDied","Data":"6e5adfcedf911ca45d1d468fdfcc28bf10d887b791314e529f872e5b71c7d6bb"} Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.342294 4768 scope.go:117] "RemoveContainer" containerID="a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.607847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-kube-api-access-49zxm" (OuterVolumeSpecName: "kube-api-access-49zxm") pod "9bdf0f0a-88bf-4a26-a622-1815fdd3031f" (UID: "9bdf0f0a-88bf-4a26-a622-1815fdd3031f"). InnerVolumeSpecName "kube-api-access-49zxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.637470 4768 scope.go:117] "RemoveContainer" containerID="4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.651448 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49zxm\" (UniqueName: \"kubernetes.io/projected/9bdf0f0a-88bf-4a26-a622-1815fdd3031f-kube-api-access-49zxm\") on node \"crc\" DevicePath \"\"" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.754950 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbpmb"] Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.758202 4768 scope.go:117] "RemoveContainer" containerID="b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d" Nov 24 18:53:06 crc kubenswrapper[4768]: E1124 18:53:06.758764 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d\": container with ID starting with b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d not found: ID does not exist" containerID="b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.758811 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d"} err="failed to get container status \"b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d\": rpc error: code = NotFound desc = could not find container \"b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d\": container with 
ID starting with b73349a55f4515e75b51dfb0fac49f4576419b5bd42b922e2d014c030484ab9d not found: ID does not exist" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.758840 4768 scope.go:117] "RemoveContainer" containerID="a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f" Nov 24 18:53:06 crc kubenswrapper[4768]: E1124 18:53:06.759142 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f\": container with ID starting with a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f not found: ID does not exist" containerID="a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.759179 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f"} err="failed to get container status \"a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f\": rpc error: code = NotFound desc = could not find container \"a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f\": container with ID starting with a3ffe1eb9357fedcf86bd64c50c45640369e16b5f55b843d3f3223ce340f609f not found: ID does not exist" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.759202 4768 scope.go:117] "RemoveContainer" containerID="4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d" Nov 24 18:53:06 crc kubenswrapper[4768]: E1124 18:53:06.759627 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d\": container with ID starting with 4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d not found: ID does not exist" containerID="4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.759652 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d"} err="failed to get container status \"4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d\": rpc error: code = NotFound desc = could not find container \"4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d\": container with ID starting with 4a558cfba067ff63eb1ad54ee26f21c811ea5dd60c439226ecba98009b3cc30d not found: ID does not exist" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.764511 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbpmb"] Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.837903 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-97698dcdb-54zqg_5cb6b015-ae5e-438f-9aec-c25982a2febc/barbican-keystone-listener/0.log" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.864891 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-97698dcdb-54zqg_5cb6b015-ae5e-438f-9aec-c25982a2febc/barbican-keystone-listener-log/0.log" Nov 24 18:53:06 crc kubenswrapper[4768]: I1124 18:53:06.872655 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b7d468cdf-9fjfm_b343e1cc-a6b5-4074-98b3-a4bddb9b2730/barbican-worker/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 
18:53:07.042045 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b7d468cdf-9fjfm_b343e1cc-a6b5-4074-98b3-a4bddb9b2730/barbican-worker-log/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.114919 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b_0d74256a-a4fc-4ecf-a57c-09aa5686878b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.299035 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx_03a4429e-4032-4d71-adc7-7257ac152323/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.321946 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2_0938fce9-58c6-4933-aeb3-49e2fe28bf0f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.533082 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d_ca59c4d5-5455-49a2-885e-d6e8eb3103fd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.535981 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/ceilometer-central-agent/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.624442 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/ceilometer-notification-agent/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.718264 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/proxy-httpd/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.722183 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/sg-core/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.825617 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8_d974ce0f-88e9-465d-9c74-6a7531593c4b/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.909729 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" path="/var/lib/kubelet/pods/9bdf0f0a-88bf-4a26-a622-1815fdd3031f/volumes" Nov 24 18:53:07 crc kubenswrapper[4768]: I1124 18:53:07.945113 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf_2d65345f-930f-4b71-9968-a613d7c11a33/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:08 crc kubenswrapper[4768]: I1124 18:53:08.059609 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6cd1c8b-47af-4035-9e6f-601dd5b94cd3/cinder-api/0.log" Nov 24 18:53:08 crc kubenswrapper[4768]: I1124 18:53:08.221237 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6cd1c8b-47af-4035-9e6f-601dd5b94cd3/cinder-api-log/0.log" Nov 24 18:53:08 crc kubenswrapper[4768]: I1124 18:53:08.571611 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_9d187717-3b2d-42c1-9daa-6db0b5d2c14c/cinder-backup/0.log" Nov 24 18:53:08 crc kubenswrapper[4768]: I1124 18:53:08.841695 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d187717-3b2d-42c1-9daa-6db0b5d2c14c/probe/0.log" Nov 24 18:53:08 crc kubenswrapper[4768]: I1124 18:53:08.888642 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_40369462-11a9-45f0-ad9b-cec7971e9414/cinder-scheduler/0.log" Nov 24 18:53:08 crc kubenswrapper[4768]: I1124 18:53:08.940717 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_40369462-11a9-45f0-ad9b-cec7971e9414/probe/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.100504 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_97567296-4a8c-4270-96b4-83eaabf8194b/probe/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.153098 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_97567296-4a8c-4270-96b4-83eaabf8194b/cinder-volume/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.318924 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7_1dd3638b-dad5-4d28-8451-1ef9cbe46251/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.388177 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz_cdd7e3c1-531f-4b9b-99bb-057c5078cf95/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.517229 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-l8v2f_841499fa-7a48-465c-891c-13987e5064d5/init/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.731225 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-l8v2f_841499fa-7a48-465c-891c-13987e5064d5/dnsmasq-dns/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.747524 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-l8v2f_841499fa-7a48-465c-891c-13987e5064d5/init/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.771053 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9/glance-log/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.823988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9/glance-httpd/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.914714 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7d82efd-27b9-4b06-a476-230d3dbbb176/glance-httpd/0.log" Nov 24 18:53:09 crc kubenswrapper[4768]: I1124 18:53:09.923275 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7d82efd-27b9-4b06-a476-230d3dbbb176/glance-log/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.121047 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk_f5889b94-1134-4803-88de-f82ae87f5720/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.136860 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85f468447b-zhvc8_cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274/horizon/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.236073 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85f468447b-zhvc8_cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274/horizon-log/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.426625 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8th9m_afb6ccb1-e75a-470b-9755-a3359c7d23fd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.721958 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_20d3ec89-0004-4ed5-ae4b-c9dcf85a3151/kube-state-metrics/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.752387 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-56748c45b5-4df84_434c7b39-9f1a-4032-b6fb-41c315a3a521/keystone-api/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.836988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk_ad4a499f-9065-421e-9c19-6b6ae06f255e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 18:53:10 crc kubenswrapper[4768]: I1124 18:53:10.949018 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f30f2c98-4600-4324-b983-59a519225520/manila-api-log/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.024187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-d357-account-create-jbk6f_f9eb5f31-ed6d-43b8-920a-9d6767e66382/mariadb-account-create/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.039542 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f30f2c98-4600-4324-b983-59a519225520/manila-api/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.188901 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-create-7pw6k_60ebe595-8584-4ad6-a043-b2df4d7cef79/mariadb-database-create/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.249360 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-sync-867w9_6f09743b-4494-416b-98c3-2bfe275c366c/manila-db-sync/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.387480 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_685b1427-a20b-4fb0-a6c9-42ec98f11d67/probe/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.490184 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_685b1427-a20b-4fb0-a6c9-42ec98f11d67/manila-scheduler/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.505277 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59a6e210-36bf-431b-a1b4-3784ec202cde/manila-share/0.log" Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.519270 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59a6e210-36bf-431b-a1b4-3784ec202cde/probe/0.log" 
Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.749555 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-844dbf79df-5t2np_6f9024a7-971e-460c-8b41-157dc2403a44/neutron-api/0.log"
Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.772292 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-844dbf79df-5t2np_6f9024a7-971e-460c-8b41-157dc2403a44/neutron-httpd/0.log"
Nov 24 18:53:11 crc kubenswrapper[4768]: I1124 18:53:11.965304 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz_edac5bf5-aa67-431e-9e1a-3551d9323772/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.293901 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_09017e2b-873f-446e-9d2c-8dcdddb26732/nova-api-log/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.393543 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ae1cfe70-c0e5-4191-8605-c57257bfef1f/nova-cell0-conductor-conductor/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.432454 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_09017e2b-873f-446e-9d2c-8dcdddb26732/nova-api-api/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.588713 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9883b617-fef7-4b4e-9856-e7075ba94d9e/nova-cell1-conductor-conductor/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.658589 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2f5e8953-6f74-4185-8020-585c1fc3d9f1/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.894010 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz_fd99c2dc-4b0c-49e8-bc2e-59a8ad923066/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:12 crc kubenswrapper[4768]: I1124 18:53:12.990438 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9e26f8aa-16b5-445c-9568-4e56b3665004/nova-metadata-log/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.271370 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba0653c2-07ff-4e12-a6ab-d1f1f81a5344/nova-scheduler-scheduler/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.300297 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_758c992e-f62f-4efd-af1d-0c1279d68544/mysql-bootstrap/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.488916 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_758c992e-f62f-4efd-af1d-0c1279d68544/mysql-bootstrap/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.492615 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_758c992e-f62f-4efd-af1d-0c1279d68544/galera/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.712091 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8145b894-fd09-47c1-b9c2-0cb4cfa6d293/mysql-bootstrap/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.822362 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_40180404-c438-415c-8787-05a1cc8461d0/memcached/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.862654 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8145b894-fd09-47c1-b9c2-0cb4cfa6d293/mysql-bootstrap/0.log"
Nov 24 18:53:13 crc kubenswrapper[4768]: I1124 18:53:13.870331 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8145b894-fd09-47c1-b9c2-0cb4cfa6d293/galera/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.057822 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e5ca5655-0b68-4c97-984f-2085144d98dc/openstackclient/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.081016 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f9558_2beabb7a-c951-4e24-8a6e-83ceb0ebb087/openstack-network-exporter/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.120896 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9e26f8aa-16b5-445c-9568-4e56b3665004/nova-metadata-metadata/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.258132 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovsdb-server-init/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.568205 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovsdb-server-init/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.602574 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovs-vswitchd/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.631028 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zlg8p_710c430d-b973-47b9-9917-2db7864f7570/ovn-controller/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.631991 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovsdb-server/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.780782 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zcf2w_fd87ee72-91d9-40a2-a95f-f4358b524d8f/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.818311 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09191ff5-4686-4243-a0b4-3dd710ead568/openstack-network-exporter/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.860334 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09191ff5-4686-4243-a0b4-3dd710ead568/ovn-northd/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.898152 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:53:14 crc kubenswrapper[4768]: E1124 18:53:14.898522 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.980345 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c15f153b-967a-4edd-8c49-fd474a1d5de3/openstack-network-exporter/0.log"
Nov 24 18:53:14 crc kubenswrapper[4768]: I1124 18:53:14.981208 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c15f153b-967a-4edd-8c49-fd474a1d5de3/ovsdbserver-nb/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.063056 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4b5d5ef6-f6b9-4930-8426-a0718b3a754f/openstack-network-exporter/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.236804 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b54949f4-59kjn_43c2665c-ef67-4325-bad9-7e42cf3195bd/placement-api/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.250984 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4b5d5ef6-f6b9-4930-8426-a0718b3a754f/ovsdbserver-sb/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.331988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b54949f4-59kjn_43c2665c-ef67-4325-bad9-7e42cf3195bd/placement-log/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.397147 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f61bf1e8-52b3-4777-ad9b-52c8a1cad06c/setup-container/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.574981 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f61bf1e8-52b3-4777-ad9b-52c8a1cad06c/setup-container/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.635069 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2d3ded99-92ff-43cc-83de-6042d6c83acf/setup-container/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.651790 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f61bf1e8-52b3-4777-ad9b-52c8a1cad06c/rabbitmq/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.818674 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2d3ded99-92ff-43cc-83de-6042d6c83acf/setup-container/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.873717 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2d3ded99-92ff-43cc-83de-6042d6c83acf/rabbitmq/0.log"
Nov 24 18:53:15 crc kubenswrapper[4768]: I1124 18:53:15.877522 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg_474b1f4d-271b-4abb-bad4-fef9d86fff99/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:16 crc kubenswrapper[4768]: I1124 18:53:16.007654 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b_e2f4a9fd-b80f-44d1-80b8-298119d3b967/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:16 crc kubenswrapper[4768]: I1124 18:53:16.079218 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gdnxb_621b6bcf-7a5c-4a85-9a8f-379e95bad6ac/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:16 crc kubenswrapper[4768]: I1124 18:53:16.140988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-l4ldg_e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76/ssh-known-hosts-edpm-deployment/0.log"
Nov 24 18:53:16 crc kubenswrapper[4768]: I1124 18:53:16.968172 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a70c965c-d29f-4286-b2e4-a580073783c5/tempest-tests-tempest-tests-runner/0.log"
Nov 24 18:53:17 crc kubenswrapper[4768]: I1124 18:53:17.019220 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_40331542-20c7-4f93-8571-cc1bcaad9d48/test-operator-logs-container/0.log"
Nov 24 18:53:17 crc kubenswrapper[4768]: I1124 18:53:17.161260 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-84jr8_0ca0ce9c-abe8-49c5-9aed-d63e4bae7811/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 18:53:27 crc kubenswrapper[4768]: I1124 18:53:27.898393 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:53:27 crc kubenswrapper[4768]: E1124 18:53:27.899166 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:53:28 crc kubenswrapper[4768]: I1124 18:53:28.063878 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-7pw6k"]
Nov 24 18:53:28 crc kubenswrapper[4768]: I1124 18:53:28.073841 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d357-account-create-jbk6f"]
Nov 24 18:53:28 crc kubenswrapper[4768]: I1124 18:53:28.083421 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-7pw6k"]
Nov 24 18:53:28 crc kubenswrapper[4768]: I1124 18:53:28.095169 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d357-account-create-jbk6f"]
Nov 24 18:53:29 crc kubenswrapper[4768]: I1124 18:53:29.910964 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ebe595-8584-4ad6-a043-b2df4d7cef79" path="/var/lib/kubelet/pods/60ebe595-8584-4ad6-a043-b2df4d7cef79/volumes"
Nov 24 18:53:29 crc kubenswrapper[4768]: I1124 18:53:29.912267 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9eb5f31-ed6d-43b8-920a-9d6767e66382" path="/var/lib/kubelet/pods/f9eb5f31-ed6d-43b8-920a-9d6767e66382/volumes"
Nov 24 18:53:40 crc kubenswrapper[4768]: I1124 18:53:40.899811 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:53:40 crc kubenswrapper[4768]: E1124 18:53:40.901098 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:53:42 crc kubenswrapper[4768]: I1124 18:53:42.953861 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/util/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.151782 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/pull/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.169808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/util/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.170027 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/pull/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.382850 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/pull/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.382900 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/extract/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.389530 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/util/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.552268 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-wtd7r_c6d746c7-cf41-4ebd-95ba-e23836f6e5d4/kube-rbac-proxy/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.603541 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-wtd7r_c6d746c7-cf41-4ebd-95ba-e23836f6e5d4/manager/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.626715 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-nx9kk_ab197189-f8ba-4b06-b62a-73dd90994a39/kube-rbac-proxy/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.793190 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-nx9kk_ab197189-f8ba-4b06-b62a-73dd90994a39/manager/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.805335 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jg4mn_52de35ae-ab63-4e1b-88d1-e42033ee56b7/kube-rbac-proxy/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.846663 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jg4mn_52de35ae-ab63-4e1b-88d1-e42033ee56b7/manager/0.log"
Nov 24 18:53:43 crc kubenswrapper[4768]: I1124 18:53:43.991653 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-69fbff6fff-t2zl8_28171867-a10a-4f0c-840d-ce55038bcd93/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.126346 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-69fbff6fff-t2zl8_28171867-a10a-4f0c-840d-ce55038bcd93/manager/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.179064 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-xw2jj_afa155f0-dde8-4d99-a454-527207b3189c/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.185458 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-xw2jj_afa155f0-dde8-4d99-a454-527207b3189c/manager/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.324678 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-k5fkx_34b164fd-5d2f-4c00-83dc-ad8a90f4b94c/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.354728 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-k5fkx_34b164fd-5d2f-4c00-83dc-ad8a90f4b94c/manager/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.499022 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-2wljz_b44a0f95-c792-4375-9292-34a95608c64f/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.531099 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-m6skf_ab3b5e40-6284-45cb-822e-a9490b1794c5/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.625064 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-2wljz_b44a0f95-c792-4375-9292-34a95608c64f/manager/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.673839 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-m6skf_ab3b5e40-6284-45cb-822e-a9490b1794c5/manager/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.743176 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-5sprh_8d6fc3b4-896a-4480-9371-930a2882151e/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.855657 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-5sprh_8d6fc3b4-896a-4480-9371-930a2882151e/manager/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.907591 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-b6vk2_8d92c413-b62d-4896-ae13-1ee9608aa65a/kube-rbac-proxy/0.log"
Nov 24 18:53:44 crc kubenswrapper[4768]: I1124 18:53:44.969072 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-b6vk2_8d92c413-b62d-4896-ae13-1ee9608aa65a/manager/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.040278 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-846gl_2c04229f-5a27-4477-816d-60d5f1977144/kube-rbac-proxy/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.132189 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-846gl_2c04229f-5a27-4477-816d-60d5f1977144/manager/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.225058 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-hdfsr_7a599ec7-7361-4e08-8d81-3cfc208d41b5/kube-rbac-proxy/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.262063 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-hdfsr_7a599ec7-7361-4e08-8d81-3cfc208d41b5/manager/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.389343 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-4mqdl_583db3d6-5f9c-4ce1-8214-06963fe50f96/kube-rbac-proxy/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.461953 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-4mqdl_583db3d6-5f9c-4ce1-8214-06963fe50f96/manager/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.577763 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-f95nv_29ac0137-f29a-4a1f-8435-f4ec688a5948/manager/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.611505 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-f95nv_29ac0137-f29a-4a1f-8435-f4ec688a5948/kube-rbac-proxy/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.713811 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lv927_d54c925d-91d6-4bb8-acff-623c4f213352/kube-rbac-proxy/0.log"
Nov 24 18:53:45 crc kubenswrapper[4768]: I1124 18:53:45.793081 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lv927_d54c925d-91d6-4bb8-acff-623c4f213352/manager/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.091060 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xx6dm_911161df-90b7-4df2-93d4-9e91b2bf2e91/registry-server/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.242975 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7b874cbcf5-5ssbf_029c591e-99fb-494c-93f1-c695b2b8b744/operator/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.250510 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-fz64p_0f74f3df-ed63-4105-882e-c3122177da3a/kube-rbac-proxy/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.392791 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-fz64p_0f74f3df-ed63-4105-882e-c3122177da3a/manager/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.462395 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2t64b_78e75462-3120-4d07-a571-56727914e173/kube-rbac-proxy/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.483628 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2t64b_78e75462-3120-4d07-a571-56727914e173/manager/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.672457 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-4dwgz_8fe91de1-efe8-43e5-8b29-89043d06e880/kube-rbac-proxy/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.687015 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-csz8k_dfa124f2-a194-4cae-bfed-eb56288e56a6/operator/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.804553 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-4dwgz_8fe91de1-efe8-43e5-8b29-89043d06e880/manager/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.869430 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-lfbgz_4d4b069e-80e6-409b-aeee-130ac4351f32/kube-rbac-proxy/0.log"
Nov 24 18:53:46 crc kubenswrapper[4768]: I1124 18:53:46.984626 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-lfbgz_4d4b069e-80e6-409b-aeee-130ac4351f32/manager/0.log"
Nov 24 18:53:47 crc kubenswrapper[4768]: I1124 18:53:47.121364 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-d2hdv_1f0a9442-916e-442d-bb0f-6060ba5915c8/kube-rbac-proxy/0.log"
Nov 24 18:53:47 crc kubenswrapper[4768]: I1124 18:53:47.137964 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-d2hdv_1f0a9442-916e-442d-bb0f-6060ba5915c8/manager/0.log"
Nov 24 18:53:47 crc kubenswrapper[4768]: I1124 18:53:47.290868 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-2264q_c6d6eee2-6cb1-411d-837f-921b1c6c92fb/kube-rbac-proxy/0.log"
Nov 24 18:53:47 crc kubenswrapper[4768]: I1124 18:53:47.332338 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bdb766b46-6b4tf_ba241c62-4e0e-4e9b-bff9-4f590d0a1d28/manager/0.log"
Nov 24 18:53:47 crc kubenswrapper[4768]: I1124 18:53:47.336918 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-2264q_c6d6eee2-6cb1-411d-837f-921b1c6c92fb/manager/0.log"
Nov 24 18:53:51 crc kubenswrapper[4768]: I1124 18:53:51.905320 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:53:51 crc kubenswrapper[4768]: E1124 18:53:51.906347 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:53:53 crc kubenswrapper[4768]: I1124 18:53:53.080190 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-867w9"]
Nov 24 18:53:53 crc kubenswrapper[4768]: I1124 18:53:53.090574 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-867w9"]
Nov 24 18:53:53 crc kubenswrapper[4768]: I1124 18:53:53.910170 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f09743b-4494-416b-98c3-2bfe275c366c" path="/var/lib/kubelet/pods/6f09743b-4494-416b-98c3-2bfe275c366c/volumes"
Nov 24 18:54:00 crc kubenswrapper[4768]: I1124 18:54:00.501310 4768 scope.go:117] "RemoveContainer" containerID="6dec84c3a33543f5fb68adabd566b6c0190c109424caffcc500b6fc58a829261"
Nov 24 18:54:00 crc kubenswrapper[4768]: I1124 18:54:00.548787 4768 scope.go:117] "RemoveContainer" containerID="c47be35bbb70f8880f79dc4121a58457bb7968fcc1324bd4efd66903fc4868e2"
Nov 24 18:54:00 crc kubenswrapper[4768]: I1124 18:54:00.608203 4768 scope.go:117] "RemoveContainer" containerID="75c6edd6b3fbbd225044235f3b2a32887b2be8dd715f28ab67da0fdc9b6995f8"
Nov 24 18:54:05 crc kubenswrapper[4768]: I1124 18:54:05.898819 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:54:05 crc kubenswrapper[4768]: E1124 18:54:05.899668 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:54:06 crc kubenswrapper[4768]: I1124 18:54:06.015386 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rrct4_622d16ca-1d8c-49e7-8ad7-c7b33b9003f2/control-plane-machine-set-operator/0.log"
Nov 24 18:54:06 crc kubenswrapper[4768]: I1124 18:54:06.209026 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xxlhx_f4312574-3ae8-49f4-a799-e20198b71149/kube-rbac-proxy/0.log"
Nov 24 18:54:06 crc kubenswrapper[4768]: I1124 18:54:06.260374 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xxlhx_f4312574-3ae8-49f4-a799-e20198b71149/machine-api-operator/0.log"
Nov 24 18:54:20 crc kubenswrapper[4768]: I1124 18:54:20.305706 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-66xg6_24caa3d8-4ce8-4918-82c5-2c71e2b95e01/cert-manager-controller/0.log"
Nov 24 18:54:20 crc kubenswrapper[4768]: I1124 18:54:20.339053 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8nrg2_62d5c0eb-892b-455f-8ddd-b2fdb47ea42d/cert-manager-cainjector/0.log"
Nov 24 18:54:20 crc kubenswrapper[4768]: I1124 18:54:20.480433 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2qvx7_3d150fe0-3a31-4024-b158-8dd172e9aa1e/cert-manager-webhook/0.log"
Nov 24 18:54:20 crc kubenswrapper[4768]: I1124 18:54:20.898837 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:54:20 crc kubenswrapper[4768]: E1124 18:54:20.899284 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:54:33 crc kubenswrapper[4768]: I1124 18:54:33.817632 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qxtkj_70c8f860-b6e0-4407-bfd8-be567169db2c/nmstate-handler/0.log"
Nov 24 18:54:33 crc kubenswrapper[4768]: I1124 18:54:33.846724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-hnwzz_07b3a9eb-7a3b-4f8c-b205-0becb2a0168b/nmstate-console-plugin/0.log"
Nov 24 18:54:33 crc kubenswrapper[4768]: I1124 18:54:33.975332 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-676sm_822888f3-7b2d-48e4-a58e-42885dd6edf0/kube-rbac-proxy/0.log"
Nov 24 18:54:34 crc kubenswrapper[4768]: I1124 18:54:34.014473 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-676sm_822888f3-7b2d-48e4-a58e-42885dd6edf0/nmstate-metrics/0.log"
Nov 24 18:54:34 crc kubenswrapper[4768]: I1124 18:54:34.150090 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-65z5p_2de3be4f-3f3a-4789-ad93-341bc12f368e/nmstate-operator/0.log"
Nov 24 18:54:34 crc kubenswrapper[4768]: I1124 18:54:34.215755 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-fdltj_204f91a8-34ab-4a27-96eb-1602cb1f1ed8/nmstate-webhook/0.log"
Nov 24 18:54:34 crc kubenswrapper[4768]: I1124 18:54:34.898637 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:54:34 crc kubenswrapper[4768]: E1124 18:54:34.899139 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"
Nov 24 18:54:47 crc kubenswrapper[4768]: I1124 18:54:47.898327 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b"
Nov 24 18:54:47 crc kubenswrapper[4768]: E1124 18:54:47.899094 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj"
podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:54:48 crc kubenswrapper[4768]: I1124 18:54:48.873672 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8wcfs_d270c276-5cc7-40cb-a690-27a3e3b5d29a/controller/0.log" Nov 24 18:54:48 crc kubenswrapper[4768]: I1124 18:54:48.897211 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8wcfs_d270c276-5cc7-40cb-a690-27a3e3b5d29a/kube-rbac-proxy/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.030753 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.221838 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.253660 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.259922 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.268213 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.449619 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.484750 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.493043 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.513676 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.692989 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.694296 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.706135 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/controller/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.717914 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.873236 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/kube-rbac-proxy/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.882005 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/frr-metrics/0.log" Nov 24 18:54:49 crc kubenswrapper[4768]: I1124 18:54:49.913521 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/kube-rbac-proxy-frr/0.log" Nov 24 18:54:50 crc kubenswrapper[4768]: I1124 18:54:50.148271 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-bmlh2_d52b407a-4b4f-47ce-9cc4-244b3fca2db4/frr-k8s-webhook-server/0.log" Nov 24 18:54:50 crc kubenswrapper[4768]: I1124 18:54:50.150099 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/reloader/0.log" Nov 24 18:54:50 crc kubenswrapper[4768]: I1124 18:54:50.410527 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65d776c5c5-mm52q_59812c96-7130-431b-8e63-08a04a76a481/manager/0.log" Nov 24 18:54:50 crc kubenswrapper[4768]: I1124 18:54:50.610446 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ddc448d79-8bqsf_60867050-3f57-4b08-ace3-524c54adfeff/webhook-server/0.log" Nov 24 18:54:50 crc kubenswrapper[4768]: I1124 18:54:50.649156 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xj9kr_d1e6e133-4775-411b-b0e1-516e2cd2e276/kube-rbac-proxy/0.log" Nov 24 18:54:51 crc kubenswrapper[4768]: I1124 18:54:51.284468 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xj9kr_d1e6e133-4775-411b-b0e1-516e2cd2e276/speaker/0.log" Nov 24 18:54:51 crc kubenswrapper[4768]: I1124 18:54:51.332262 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/frr/0.log" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.767217 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2vvgq"] Nov 24 18:54:55 crc kubenswrapper[4768]: E1124 18:54:55.768192 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="registry-server" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.768205 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="registry-server" Nov 24 18:54:55 crc kubenswrapper[4768]: E1124 18:54:55.768221 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="extract-utilities" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.768228 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="extract-utilities" Nov 24 18:54:55 crc kubenswrapper[4768]: E1124 18:54:55.768263 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="extract-content" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.768271 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="extract-content" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.768473 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bdf0f0a-88bf-4a26-a622-1815fdd3031f" containerName="registry-server" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 
18:54:55.769750 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.788850 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vvgq"] Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.899778 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-utilities\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.899932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-catalog-content\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:55 crc kubenswrapper[4768]: I1124 18:54:55.900269 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg8jc\" (UniqueName: \"kubernetes.io/projected/36363f94-1333-4fcd-baea-6b900442ff18-kube-api-access-bg8jc\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.002610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-utilities\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.002714 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-catalog-content\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.002791 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg8jc\" (UniqueName: \"kubernetes.io/projected/36363f94-1333-4fcd-baea-6b900442ff18-kube-api-access-bg8jc\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.003536 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-catalog-content\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.003679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-utilities\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc 
kubenswrapper[4768]: I1124 18:54:56.025554 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg8jc\" (UniqueName: \"kubernetes.io/projected/36363f94-1333-4fcd-baea-6b900442ff18-kube-api-access-bg8jc\") pod \"certified-operators-2vvgq\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.113330 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:54:56 crc kubenswrapper[4768]: I1124 18:54:56.619082 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vvgq"] Nov 24 18:54:57 crc kubenswrapper[4768]: I1124 18:54:57.464401 4768 generic.go:334] "Generic (PLEG): container finished" podID="36363f94-1333-4fcd-baea-6b900442ff18" containerID="4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7" exitCode=0 Nov 24 18:54:57 crc kubenswrapper[4768]: I1124 18:54:57.464696 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerDied","Data":"4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7"} Nov 24 18:54:57 crc kubenswrapper[4768]: I1124 18:54:57.464988 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerStarted","Data":"374abdc71b5f3dec4e4b921cf10e6f10dd61472249ca54264dd06e24538ed1dd"} Nov 24 18:54:59 crc kubenswrapper[4768]: I1124 18:54:59.483652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerStarted","Data":"6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f"} Nov 24 18:55:00 crc kubenswrapper[4768]: I1124 18:55:00.504975 4768 generic.go:334] "Generic (PLEG): container finished" podID="36363f94-1333-4fcd-baea-6b900442ff18" containerID="6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f" exitCode=0 Nov 24 18:55:00 crc kubenswrapper[4768]: I1124 18:55:00.505480 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerDied","Data":"6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f"} Nov 24 18:55:01 crc kubenswrapper[4768]: I1124 18:55:01.519285 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerStarted","Data":"82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e"} Nov 24 18:55:01 crc kubenswrapper[4768]: I1124 18:55:01.545566 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2vvgq" podStartSLOduration=3.046750567 podStartE2EDuration="6.545545516s" podCreationTimestamp="2025-11-24 18:54:55 +0000 UTC" firstStartedPulling="2025-11-24 18:54:57.467020547 +0000 UTC m=+3936.327602324" lastFinishedPulling="2025-11-24 18:55:00.965815496 +0000 UTC m=+3939.826397273" observedRunningTime="2025-11-24 18:55:01.535789894 +0000 UTC m=+3940.396371731" watchObservedRunningTime="2025-11-24 18:55:01.545545516 +0000 UTC m=+3940.406127293" Nov 24 18:55:01 crc kubenswrapper[4768]: I1124 
18:55:01.913029 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:55:01 crc kubenswrapper[4768]: E1124 18:55:01.917010 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:55:04 crc kubenswrapper[4768]: I1124 18:55:04.735578 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/util/0.log" Nov 24 18:55:04 crc kubenswrapper[4768]: I1124 18:55:04.885423 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/util/0.log" Nov 24 18:55:04 crc kubenswrapper[4768]: I1124 18:55:04.911929 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/pull/0.log" Nov 24 18:55:04 crc kubenswrapper[4768]: I1124 18:55:04.935980 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/pull/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.110644 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/pull/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.123848 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/util/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.135114 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/extract/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.279002 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/extract-utilities/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.415336 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/extract-utilities/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.459086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/extract-content/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.467263 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/extract-content/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.621346 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/extract-content/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.624846 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/extract-utilities/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.641246 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2vvgq_36363f94-1333-4fcd-baea-6b900442ff18/registry-server/0.log" Nov 24 18:55:05 crc kubenswrapper[4768]: I1124 18:55:05.797340 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-utilities/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.030519 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-utilities/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.030846 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-content/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.031003 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-content/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.113759 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.114074 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.189769 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.222147 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-content/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.230578 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-utilities/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.435978 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-utilities/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.448560 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/registry-server/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.598365 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-content/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.609548 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-content/0.log" Nov 24 18:55:06 crc 
kubenswrapper[4768]: I1124 18:55:06.610548 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-utilities/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.615216 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.659852 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2vvgq"] Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.765106 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-utilities/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.801455 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-content/0.log" Nov 24 18:55:06 crc kubenswrapper[4768]: I1124 18:55:06.981226 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/util/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.257828 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/pull/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.274891 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/util/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.289363 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/pull/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.420861 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/util/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.548238 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/pull/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.584499 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/extract/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.606964 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/registry-server/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.733372 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vtrzd_d17e8f38-c1cf-4774-ad10-d2e08512c158/marketplace-operator/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.794058 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-utilities/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.935289 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-utilities/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.935348 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-content/0.log" Nov 24 18:55:07 crc kubenswrapper[4768]: I1124 18:55:07.939853 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-content/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.082378 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-utilities/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.100401 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-content/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.220516 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-utilities/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.270732 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/registry-server/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.305696 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-utilities/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.325151 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-content/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.373987 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-content/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.534690 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-utilities/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.578703 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2vvgq" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="registry-server" containerID="cri-o://82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e" gracePeriod=2 Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.590189 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-content/0.log" Nov 24 18:55:08 crc kubenswrapper[4768]: I1124 18:55:08.794332 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/registry-server/0.log" Nov 24 18:55:09 
crc kubenswrapper[4768]: I1124 18:55:09.033427 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.117713 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-catalog-content\") pod \"36363f94-1333-4fcd-baea-6b900442ff18\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.117874 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg8jc\" (UniqueName: \"kubernetes.io/projected/36363f94-1333-4fcd-baea-6b900442ff18-kube-api-access-bg8jc\") pod \"36363f94-1333-4fcd-baea-6b900442ff18\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.117920 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-utilities\") pod \"36363f94-1333-4fcd-baea-6b900442ff18\" (UID: \"36363f94-1333-4fcd-baea-6b900442ff18\") " Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.119166 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-utilities" (OuterVolumeSpecName: "utilities") pod "36363f94-1333-4fcd-baea-6b900442ff18" (UID: "36363f94-1333-4fcd-baea-6b900442ff18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.123556 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36363f94-1333-4fcd-baea-6b900442ff18-kube-api-access-bg8jc" (OuterVolumeSpecName: "kube-api-access-bg8jc") pod "36363f94-1333-4fcd-baea-6b900442ff18" (UID: "36363f94-1333-4fcd-baea-6b900442ff18"). InnerVolumeSpecName "kube-api-access-bg8jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.209519 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36363f94-1333-4fcd-baea-6b900442ff18" (UID: "36363f94-1333-4fcd-baea-6b900442ff18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.220658 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg8jc\" (UniqueName: \"kubernetes.io/projected/36363f94-1333-4fcd-baea-6b900442ff18-kube-api-access-bg8jc\") on node \"crc\" DevicePath \"\"" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.220694 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.220709 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36363f94-1333-4fcd-baea-6b900442ff18-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.590406 4768 generic.go:334] "Generic (PLEG): container finished" podID="36363f94-1333-4fcd-baea-6b900442ff18" containerID="82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e" exitCode=0 Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.590467 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerDied","Data":"82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e"} Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.590561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vvgq" event={"ID":"36363f94-1333-4fcd-baea-6b900442ff18","Type":"ContainerDied","Data":"374abdc71b5f3dec4e4b921cf10e6f10dd61472249ca54264dd06e24538ed1dd"} Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.590596 4768 scope.go:117] "RemoveContainer" containerID="82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.590825 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2vvgq" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.624316 4768 scope.go:117] "RemoveContainer" containerID="6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.663811 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2vvgq"] Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.669983 4768 scope.go:117] "RemoveContainer" containerID="4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.673978 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2vvgq"] Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.714023 4768 scope.go:117] "RemoveContainer" containerID="82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e" Nov 24 18:55:09 crc kubenswrapper[4768]: E1124 18:55:09.714587 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e\": container with ID starting with 82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e not found: ID does not exist" containerID="82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.714646 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e"} err="failed to get container status \"82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e\": rpc error: code = NotFound desc = could not find container \"82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e\": container with ID starting with 82737fbf4d629aaa840a05b72158c618112f5f9498bdaef13d1b33b5257bd20e not found: ID does not exist" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.714683 4768 scope.go:117] "RemoveContainer" containerID="6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f" Nov 24 18:55:09 crc kubenswrapper[4768]: E1124 18:55:09.715093 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f\": container with ID starting with 6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f not found: ID does not exist" containerID="6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.715126 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f"} err="failed to get container status \"6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f\": rpc error: code = NotFound desc = could not find container \"6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f\": container with ID starting with 6c57f1218e11c013df0c72d49e406984395b137f27c1ef39ec45f96f4594054f not found: ID does not exist" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.715149 4768 scope.go:117] "RemoveContainer" containerID="4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7" Nov 24 18:55:09 crc kubenswrapper[4768]: E1124 18:55:09.715532 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7\": container with ID starting with 4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7 not found: ID does not exist" containerID="4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.715571 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7"} err="failed to get container status \"4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7\": rpc error: code = NotFound desc = could not find container \"4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7\": container with ID starting with 4f60ea7b29c9d2a098670b49ea01b33bceb7b66c0a3f3dc11dc3bcb4ccc53de7 not found: ID does not exist" Nov 24 18:55:09 crc kubenswrapper[4768]: I1124 18:55:09.912376 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36363f94-1333-4fcd-baea-6b900442ff18" path="/var/lib/kubelet/pods/36363f94-1333-4fcd-baea-6b900442ff18/volumes" Nov 24 18:55:14 crc kubenswrapper[4768]: I1124 18:55:14.898605 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:55:14 crc kubenswrapper[4768]: E1124 18:55:14.899504 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:55:28 crc kubenswrapper[4768]: I1124 18:55:28.898082 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:55:28 crc kubenswrapper[4768]: E1124 18:55:28.898973 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 18:55:36 crc kubenswrapper[4768]: E1124 18:55:36.156210 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.58:45454->38.102.83.58:42411: write tcp 38.102.83.58:45454->38.102.83.58:42411: write: broken pipe Nov 24 18:55:43 crc kubenswrapper[4768]: I1124 18:55:43.898715 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:55:44 crc kubenswrapper[4768]: I1124 18:55:44.978824 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"6b3fe524df55b78a58eafd6e6ba92acc5e18774135a2707d5c571dc2e8a1d97a"} Nov 24 18:56:47 crc kubenswrapper[4768]: I1124 18:56:47.715094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-thnqt/must-gather-ss45m" 
event={"ID":"f538644d-3393-4e2f-9df8-8e2ca7c01444","Type":"ContainerDied","Data":"29a5c304d31b388157e5fcd8ed984eb5fa9e9c1459b6604e0fc08a6ce551bd19"} Nov 24 18:56:47 crc kubenswrapper[4768]: I1124 18:56:47.715096 4768 generic.go:334] "Generic (PLEG): container finished" podID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerID="29a5c304d31b388157e5fcd8ed984eb5fa9e9c1459b6604e0fc08a6ce551bd19" exitCode=0 Nov 24 18:56:47 crc kubenswrapper[4768]: I1124 18:56:47.716939 4768 scope.go:117] "RemoveContainer" containerID="29a5c304d31b388157e5fcd8ed984eb5fa9e9c1459b6604e0fc08a6ce551bd19" Nov 24 18:56:48 crc kubenswrapper[4768]: I1124 18:56:48.516440 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thnqt_must-gather-ss45m_f538644d-3393-4e2f-9df8-8e2ca7c01444/gather/0.log" Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.002883 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-thnqt/must-gather-ss45m"] Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.003713 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-thnqt/must-gather-ss45m" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="copy" containerID="cri-o://a5c05b8d9734f0d0ea9da0d405cecab458aa964b5c609cf93b86d474e771d876" gracePeriod=2 Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.013009 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-thnqt/must-gather-ss45m"] Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.848788 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thnqt_must-gather-ss45m_f538644d-3393-4e2f-9df8-8e2ca7c01444/copy/0.log" Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.851050 4768 generic.go:334] "Generic (PLEG): container finished" podID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerID="a5c05b8d9734f0d0ea9da0d405cecab458aa964b5c609cf93b86d474e771d876" exitCode=143 Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.976038 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thnqt_must-gather-ss45m_f538644d-3393-4e2f-9df8-8e2ca7c01444/copy/0.log" Nov 24 18:56:56 crc kubenswrapper[4768]: I1124 18:56:56.976526 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.125931 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f538644d-3393-4e2f-9df8-8e2ca7c01444-must-gather-output\") pod \"f538644d-3393-4e2f-9df8-8e2ca7c01444\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.126091 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v6dv\" (UniqueName: \"kubernetes.io/projected/f538644d-3393-4e2f-9df8-8e2ca7c01444-kube-api-access-6v6dv\") pod \"f538644d-3393-4e2f-9df8-8e2ca7c01444\" (UID: \"f538644d-3393-4e2f-9df8-8e2ca7c01444\") " Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.133882 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f538644d-3393-4e2f-9df8-8e2ca7c01444-kube-api-access-6v6dv" (OuterVolumeSpecName: "kube-api-access-6v6dv") pod "f538644d-3393-4e2f-9df8-8e2ca7c01444" (UID: "f538644d-3393-4e2f-9df8-8e2ca7c01444"). InnerVolumeSpecName "kube-api-access-6v6dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.228771 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v6dv\" (UniqueName: \"kubernetes.io/projected/f538644d-3393-4e2f-9df8-8e2ca7c01444-kube-api-access-6v6dv\") on node \"crc\" DevicePath \"\"" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.265833 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f538644d-3393-4e2f-9df8-8e2ca7c01444-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f538644d-3393-4e2f-9df8-8e2ca7c01444" (UID: "f538644d-3393-4e2f-9df8-8e2ca7c01444"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.330330 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f538644d-3393-4e2f-9df8-8e2ca7c01444-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.863749 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-thnqt_must-gather-ss45m_f538644d-3393-4e2f-9df8-8e2ca7c01444/copy/0.log" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.864581 4768 scope.go:117] "RemoveContainer" containerID="a5c05b8d9734f0d0ea9da0d405cecab458aa964b5c609cf93b86d474e771d876" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.864640 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-thnqt/must-gather-ss45m" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.891395 4768 scope.go:117] "RemoveContainer" containerID="29a5c304d31b388157e5fcd8ed984eb5fa9e9c1459b6604e0fc08a6ce551bd19" Nov 24 18:56:57 crc kubenswrapper[4768]: I1124 18:56:57.918881 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" path="/var/lib/kubelet/pods/f538644d-3393-4e2f-9df8-8e2ca7c01444/volumes" Nov 24 18:58:13 crc kubenswrapper[4768]: I1124 18:58:13.656865 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:58:13 crc kubenswrapper[4768]: I1124 18:58:13.657573 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:58:43 crc kubenswrapper[4768]: I1124 18:58:43.656565 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:58:43 crc kubenswrapper[4768]: I1124 18:58:43.657101 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 18:59:00 crc kubenswrapper[4768]: I1124 18:59:00.853796 4768 scope.go:117] "RemoveContainer" containerID="acf43ca280f70d363974a6d7acce4d5b1a110e713584c7b276a00891a237aa17" Nov 24 18:59:13 crc kubenswrapper[4768]: I1124 18:59:13.656108 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 18:59:13 crc kubenswrapper[4768]: I1124 18:59:13.656812 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 18:59:13 crc kubenswrapper[4768]: I1124 18:59:13.656886 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 18:59:13 crc kubenswrapper[4768]: I1124 18:59:13.657830 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b3fe524df55b78a58eafd6e6ba92acc5e18774135a2707d5c571dc2e8a1d97a"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 18:59:13 crc kubenswrapper[4768]: I1124 18:59:13.657902 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://6b3fe524df55b78a58eafd6e6ba92acc5e18774135a2707d5c571dc2e8a1d97a" gracePeriod=600 Nov 24 18:59:14 crc kubenswrapper[4768]: I1124 18:59:14.371629 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="6b3fe524df55b78a58eafd6e6ba92acc5e18774135a2707d5c571dc2e8a1d97a" exitCode=0 Nov 24 18:59:14 crc kubenswrapper[4768]: I1124 18:59:14.371725 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"6b3fe524df55b78a58eafd6e6ba92acc5e18774135a2707d5c571dc2e8a1d97a"} Nov 24 18:59:14 crc kubenswrapper[4768]: I1124 18:59:14.372200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerStarted","Data":"7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47"} Nov 24 18:59:14 crc kubenswrapper[4768]: I1124 18:59:14.372223 4768 scope.go:117] "RemoveContainer" containerID="f61c434a069acf434ca4c1be10c2a13312c5df4e38fcfe16f2433cddf4bc9a7b" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.058989 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7chgt/must-gather-t5vgx"] Nov 24 18:59:24 crc kubenswrapper[4768]: E1124 18:59:24.066612 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="extract-content" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.066649 4768 
state_mem.go:107] "Deleted CPUSet assignment" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="extract-content" Nov 24 18:59:24 crc kubenswrapper[4768]: E1124 18:59:24.066670 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="registry-server" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.066680 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="registry-server" Nov 24 18:59:24 crc kubenswrapper[4768]: E1124 18:59:24.066703 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="copy" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.066711 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="copy" Nov 24 18:59:24 crc kubenswrapper[4768]: E1124 18:59:24.066727 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="gather" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.066735 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="gather" Nov 24 18:59:24 crc kubenswrapper[4768]: E1124 18:59:24.066758 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="extract-utilities" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.066767 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="extract-utilities" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.067015 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="36363f94-1333-4fcd-baea-6b900442ff18" containerName="registry-server" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.067031 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="gather" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.067050 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f538644d-3393-4e2f-9df8-8e2ca7c01444" containerName="copy" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.068332 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.072330 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7chgt"/"default-dockercfg-d5zzv" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.072612 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7chgt"/"openshift-service-ca.crt" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.072866 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7chgt"/"kube-root-ca.crt" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.076313 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7chgt/must-gather-t5vgx"] Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.182973 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqlc\" (UniqueName: \"kubernetes.io/projected/d0ccf541-cacf-4978-9dee-a43cb81c501f-kube-api-access-6wqlc\") pod \"must-gather-t5vgx\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.183015 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d0ccf541-cacf-4978-9dee-a43cb81c501f-must-gather-output\") pod \"must-gather-t5vgx\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.285101 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wqlc\" (UniqueName: \"kubernetes.io/projected/d0ccf541-cacf-4978-9dee-a43cb81c501f-kube-api-access-6wqlc\") pod \"must-gather-t5vgx\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.285151 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d0ccf541-cacf-4978-9dee-a43cb81c501f-must-gather-output\") pod \"must-gather-t5vgx\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.285710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d0ccf541-cacf-4978-9dee-a43cb81c501f-must-gather-output\") pod \"must-gather-t5vgx\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.310674 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wqlc\" (UniqueName: \"kubernetes.io/projected/d0ccf541-cacf-4978-9dee-a43cb81c501f-kube-api-access-6wqlc\") pod \"must-gather-t5vgx\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.391961 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 18:59:24 crc kubenswrapper[4768]: I1124 18:59:24.822520 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7chgt/must-gather-t5vgx"] Nov 24 18:59:25 crc kubenswrapper[4768]: I1124 18:59:25.500779 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/must-gather-t5vgx" event={"ID":"d0ccf541-cacf-4978-9dee-a43cb81c501f","Type":"ContainerStarted","Data":"f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c"} Nov 24 18:59:25 crc kubenswrapper[4768]: I1124 18:59:25.501150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/must-gather-t5vgx" event={"ID":"d0ccf541-cacf-4978-9dee-a43cb81c501f","Type":"ContainerStarted","Data":"0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91"} Nov 24 18:59:25 crc kubenswrapper[4768]: I1124 18:59:25.501162 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/must-gather-t5vgx" event={"ID":"d0ccf541-cacf-4978-9dee-a43cb81c501f","Type":"ContainerStarted","Data":"d25de59cb46550efb9378cc87be9bcf09803a92d3a9474bb3d6672d9288dcaa0"} Nov 24 18:59:25 crc kubenswrapper[4768]: I1124 18:59:25.522380 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7chgt/must-gather-t5vgx" podStartSLOduration=1.5223607239999999 podStartE2EDuration="1.522360724s" podCreationTimestamp="2025-11-24 18:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:59:25.51589565 +0000 UTC m=+4204.376477437" watchObservedRunningTime="2025-11-24 18:59:25.522360724 +0000 UTC m=+4204.382942511" Nov 24 18:59:28 crc kubenswrapper[4768]: E1124 18:59:28.149742 4768 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.58:37486->38.102.83.58:42411: read tcp 38.102.83.58:37486->38.102.83.58:42411: read: connection reset by peer Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.090028 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7chgt/crc-debug-6krc2"] Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.091869 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.180959 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-host\") pod \"crc-debug-6krc2\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.181147 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np54n\" (UniqueName: \"kubernetes.io/projected/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-kube-api-access-np54n\") pod \"crc-debug-6krc2\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.282521 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np54n\" (UniqueName: \"kubernetes.io/projected/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-kube-api-access-np54n\") pod \"crc-debug-6krc2\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.283017 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-host\") pod \"crc-debug-6krc2\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.283153 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-host\") pod \"crc-debug-6krc2\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.302649 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np54n\" (UniqueName: \"kubernetes.io/projected/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-kube-api-access-np54n\") pod \"crc-debug-6krc2\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.414806 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 18:59:29 crc kubenswrapper[4768]: W1124 18:59:29.452805 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fce9f88_3286_4c40_b5b9_f8e3ada17b53.slice/crio-6bbc3dcc4b8f4823da6e870949f09384eb3bc14b0b72a32fb961c370c16e3332 WatchSource:0}: Error finding container 6bbc3dcc4b8f4823da6e870949f09384eb3bc14b0b72a32fb961c370c16e3332: Status 404 returned error can't find the container with id 6bbc3dcc4b8f4823da6e870949f09384eb3bc14b0b72a32fb961c370c16e3332 Nov 24 18:59:29 crc kubenswrapper[4768]: I1124 18:59:29.538861 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-6krc2" event={"ID":"3fce9f88-3286-4c40-b5b9-f8e3ada17b53","Type":"ContainerStarted","Data":"6bbc3dcc4b8f4823da6e870949f09384eb3bc14b0b72a32fb961c370c16e3332"} Nov 24 18:59:30 crc kubenswrapper[4768]: I1124 18:59:30.554506 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-6krc2" event={"ID":"3fce9f88-3286-4c40-b5b9-f8e3ada17b53","Type":"ContainerStarted","Data":"658ca2c941d98bc2e8469f1e154c4d13c447062b1f3fb3a390141707551c875a"} Nov 24 18:59:30 crc kubenswrapper[4768]: I1124 18:59:30.573054 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7chgt/crc-debug-6krc2" podStartSLOduration=1.573034205 podStartE2EDuration="1.573034205s" podCreationTimestamp="2025-11-24 18:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 18:59:30.565075181 +0000 UTC m=+4209.425656968" watchObservedRunningTime="2025-11-24 18:59:30.573034205 +0000 UTC m=+4209.433615972" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.154255 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq"] Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.156163 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.164878 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.165261 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.176224 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq"] Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.209918 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2923e6d-f6b3-4279-a187-068ffd4cfd33-config-volume\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.209971 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94g6n\" (UniqueName: \"kubernetes.io/projected/c2923e6d-f6b3-4279-a187-068ffd4cfd33-kube-api-access-94g6n\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.210018 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2923e6d-f6b3-4279-a187-068ffd4cfd33-secret-volume\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.311806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2923e6d-f6b3-4279-a187-068ffd4cfd33-config-volume\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.311848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94g6n\" (UniqueName: \"kubernetes.io/projected/c2923e6d-f6b3-4279-a187-068ffd4cfd33-kube-api-access-94g6n\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.311895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2923e6d-f6b3-4279-a187-068ffd4cfd33-secret-volume\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.313086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2923e6d-f6b3-4279-a187-068ffd4cfd33-config-volume\") pod 
\"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.321380 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2923e6d-f6b3-4279-a187-068ffd4cfd33-secret-volume\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.331945 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94g6n\" (UniqueName: \"kubernetes.io/projected/c2923e6d-f6b3-4279-a187-068ffd4cfd33-kube-api-access-94g6n\") pod \"collect-profiles-29400180-5rhrq\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.485028 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.839325 4768 generic.go:334] "Generic (PLEG): container finished" podID="3fce9f88-3286-4c40-b5b9-f8e3ada17b53" containerID="658ca2c941d98bc2e8469f1e154c4d13c447062b1f3fb3a390141707551c875a" exitCode=0 Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.839419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-6krc2" event={"ID":"3fce9f88-3286-4c40-b5b9-f8e3ada17b53","Type":"ContainerDied","Data":"658ca2c941d98bc2e8469f1e154c4d13c447062b1f3fb3a390141707551c875a"} Nov 24 19:00:00 crc kubenswrapper[4768]: I1124 19:00:00.983031 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq"] Nov 24 19:00:01 crc kubenswrapper[4768]: I1124 19:00:01.847848 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2923e6d-f6b3-4279-a187-068ffd4cfd33" containerID="f33eb2d37190307d34ab9d6abd7fbbf610e20adfbc8cada9c64b0bb9d079da05" exitCode=0 Nov 24 19:00:01 crc kubenswrapper[4768]: I1124 19:00:01.848602 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" event={"ID":"c2923e6d-f6b3-4279-a187-068ffd4cfd33","Type":"ContainerDied","Data":"f33eb2d37190307d34ab9d6abd7fbbf610e20adfbc8cada9c64b0bb9d079da05"} Nov 24 19:00:01 crc kubenswrapper[4768]: I1124 19:00:01.848627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" event={"ID":"c2923e6d-f6b3-4279-a187-068ffd4cfd33","Type":"ContainerStarted","Data":"ad26752eeed256f5925f62ff479670548ff9f77614075f9cbcef9c3e82d193ae"} Nov 24 19:00:01 crc kubenswrapper[4768]: I1124 19:00:01.939336 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 19:00:01 crc kubenswrapper[4768]: I1124 19:00:01.973517 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7chgt/crc-debug-6krc2"] Nov 24 19:00:01 crc kubenswrapper[4768]: I1124 19:00:01.980598 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7chgt/crc-debug-6krc2"] Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.047698 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np54n\" (UniqueName: \"kubernetes.io/projected/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-kube-api-access-np54n\") pod \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.047944 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-host\") pod \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\" (UID: \"3fce9f88-3286-4c40-b5b9-f8e3ada17b53\") " Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.048023 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-host" (OuterVolumeSpecName: "host") pod "3fce9f88-3286-4c40-b5b9-f8e3ada17b53" (UID: "3fce9f88-3286-4c40-b5b9-f8e3ada17b53"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.048588 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-host\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.053608 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-kube-api-access-np54n" (OuterVolumeSpecName: "kube-api-access-np54n") pod "3fce9f88-3286-4c40-b5b9-f8e3ada17b53" (UID: "3fce9f88-3286-4c40-b5b9-f8e3ada17b53"). InnerVolumeSpecName "kube-api-access-np54n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.150085 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np54n\" (UniqueName: \"kubernetes.io/projected/3fce9f88-3286-4c40-b5b9-f8e3ada17b53-kube-api-access-np54n\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.856912 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bbc3dcc4b8f4823da6e870949f09384eb3bc14b0b72a32fb961c370c16e3332" Nov 24 19:00:02 crc kubenswrapper[4768]: I1124 19:00:02.856961 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-6krc2" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.195818 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7chgt/crc-debug-f9sdh"] Nov 24 19:00:03 crc kubenswrapper[4768]: E1124 19:00:03.196715 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fce9f88-3286-4c40-b5b9-f8e3ada17b53" containerName="container-00" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.196731 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fce9f88-3286-4c40-b5b9-f8e3ada17b53" containerName="container-00" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.197169 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fce9f88-3286-4c40-b5b9-f8e3ada17b53" containerName="container-00" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.198088 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.276184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q5bs\" (UniqueName: \"kubernetes.io/projected/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-kube-api-access-5q5bs\") pod \"crc-debug-f9sdh\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.276275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-host\") pod \"crc-debug-f9sdh\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.378716 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-host\") pod \"crc-debug-f9sdh\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.378862 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-host\") pod \"crc-debug-f9sdh\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.378996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q5bs\" (UniqueName: \"kubernetes.io/projected/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-kube-api-access-5q5bs\") pod \"crc-debug-f9sdh\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.690714 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q5bs\" (UniqueName: \"kubernetes.io/projected/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-kube-api-access-5q5bs\") pod \"crc-debug-f9sdh\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.816957 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.837045 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.888273 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2923e6d-f6b3-4279-a187-068ffd4cfd33-config-volume\") pod \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.888317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2923e6d-f6b3-4279-a187-068ffd4cfd33-secret-volume\") pod \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.888543 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94g6n\" (UniqueName: \"kubernetes.io/projected/c2923e6d-f6b3-4279-a187-068ffd4cfd33-kube-api-access-94g6n\") pod \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\" (UID: \"c2923e6d-f6b3-4279-a187-068ffd4cfd33\") " Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.889365 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2923e6d-f6b3-4279-a187-068ffd4cfd33-config-volume" (OuterVolumeSpecName: "config-volume") pod "c2923e6d-f6b3-4279-a187-068ffd4cfd33" (UID: "c2923e6d-f6b3-4279-a187-068ffd4cfd33"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.898586 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2923e6d-f6b3-4279-a187-068ffd4cfd33-kube-api-access-94g6n" (OuterVolumeSpecName: "kube-api-access-94g6n") pod "c2923e6d-f6b3-4279-a187-068ffd4cfd33" (UID: "c2923e6d-f6b3-4279-a187-068ffd4cfd33"). InnerVolumeSpecName "kube-api-access-94g6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.905866 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.906680 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2923e6d-f6b3-4279-a187-068ffd4cfd33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c2923e6d-f6b3-4279-a187-068ffd4cfd33" (UID: "c2923e6d-f6b3-4279-a187-068ffd4cfd33"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.923921 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fce9f88-3286-4c40-b5b9-f8e3ada17b53" path="/var/lib/kubelet/pods/3fce9f88-3286-4c40-b5b9-f8e3ada17b53/volumes" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.924804 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400180-5rhrq" event={"ID":"c2923e6d-f6b3-4279-a187-068ffd4cfd33","Type":"ContainerDied","Data":"ad26752eeed256f5925f62ff479670548ff9f77614075f9cbcef9c3e82d193ae"} Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.924838 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad26752eeed256f5925f62ff479670548ff9f77614075f9cbcef9c3e82d193ae" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.924852 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-f9sdh" event={"ID":"f9c7d332-c52b-4743-9da3-2bfe7e7f2078","Type":"ContainerStarted","Data":"1a13cd51de4b618af5a748c991137b16149f1fc89af6d09ea7e1c18837c7adad"} Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.990629 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94g6n\" (UniqueName: \"kubernetes.io/projected/c2923e6d-f6b3-4279-a187-068ffd4cfd33-kube-api-access-94g6n\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.990660 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2923e6d-f6b3-4279-a187-068ffd4cfd33-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:03 crc kubenswrapper[4768]: I1124 19:00:03.990669 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2923e6d-f6b3-4279-a187-068ffd4cfd33-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:04 crc kubenswrapper[4768]: I1124 19:00:04.903890 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml"] Nov 24 19:00:04 crc kubenswrapper[4768]: I1124 19:00:04.912966 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400135-mb4ml"] Nov 24 19:00:04 crc kubenswrapper[4768]: I1124 19:00:04.933054 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9c7d332-c52b-4743-9da3-2bfe7e7f2078" containerID="bb2e731fd1fe758eadaa555cf34c1563ea661e7ce8f0bfd76bd918b86bfd6b12" exitCode=0 Nov 24 19:00:04 crc kubenswrapper[4768]: I1124 19:00:04.933161 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-f9sdh" event={"ID":"f9c7d332-c52b-4743-9da3-2bfe7e7f2078","Type":"ContainerDied","Data":"bb2e731fd1fe758eadaa555cf34c1563ea661e7ce8f0bfd76bd918b86bfd6b12"} Nov 24 19:00:05 crc kubenswrapper[4768]: I1124 19:00:05.357229 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7chgt/crc-debug-f9sdh"] Nov 24 19:00:05 crc kubenswrapper[4768]: I1124 19:00:05.364287 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7chgt/crc-debug-f9sdh"] Nov 24 19:00:05 crc kubenswrapper[4768]: I1124 19:00:05.911670 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0753ff-e850-4c66-9e08-c71fe7a86f1d" path="/var/lib/kubelet/pods/3d0753ff-e850-4c66-9e08-c71fe7a86f1d/volumes" Nov 24 
19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.031617 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.127043 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q5bs\" (UniqueName: \"kubernetes.io/projected/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-kube-api-access-5q5bs\") pod \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.127174 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-host\") pod \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\" (UID: \"f9c7d332-c52b-4743-9da3-2bfe7e7f2078\") " Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.127277 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-host" (OuterVolumeSpecName: "host") pod "f9c7d332-c52b-4743-9da3-2bfe7e7f2078" (UID: "f9c7d332-c52b-4743-9da3-2bfe7e7f2078"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.127910 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-host\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.138714 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-kube-api-access-5q5bs" (OuterVolumeSpecName: "kube-api-access-5q5bs") pod "f9c7d332-c52b-4743-9da3-2bfe7e7f2078" (UID: "f9c7d332-c52b-4743-9da3-2bfe7e7f2078"). InnerVolumeSpecName "kube-api-access-5q5bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.229985 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q5bs\" (UniqueName: \"kubernetes.io/projected/f9c7d332-c52b-4743-9da3-2bfe7e7f2078-kube-api-access-5q5bs\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.810753 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7chgt/crc-debug-f49pd"] Nov 24 19:00:06 crc kubenswrapper[4768]: E1124 19:00:06.811450 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2923e6d-f6b3-4279-a187-068ffd4cfd33" containerName="collect-profiles" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.811466 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2923e6d-f6b3-4279-a187-068ffd4cfd33" containerName="collect-profiles" Nov 24 19:00:06 crc kubenswrapper[4768]: E1124 19:00:06.811501 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c7d332-c52b-4743-9da3-2bfe7e7f2078" containerName="container-00" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.811508 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c7d332-c52b-4743-9da3-2bfe7e7f2078" containerName="container-00" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.828678 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c7d332-c52b-4743-9da3-2bfe7e7f2078" containerName="container-00" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.828782 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2923e6d-f6b3-4279-a187-068ffd4cfd33" containerName="collect-profiles" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.830023 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.942090 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46l5m\" (UniqueName: \"kubernetes.io/projected/f8951cd2-243b-494c-9a7b-cb144201eef0-kube-api-access-46l5m\") pod \"crc-debug-f49pd\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.942274 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8951cd2-243b-494c-9a7b-cb144201eef0-host\") pod \"crc-debug-f49pd\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.953097 4768 scope.go:117] "RemoveContainer" containerID="bb2e731fd1fe758eadaa555cf34c1563ea661e7ce8f0bfd76bd918b86bfd6b12" Nov 24 19:00:06 crc kubenswrapper[4768]: I1124 19:00:06.953169 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f9sdh" Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.044767 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46l5m\" (UniqueName: \"kubernetes.io/projected/f8951cd2-243b-494c-9a7b-cb144201eef0-kube-api-access-46l5m\") pod \"crc-debug-f49pd\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.044972 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8951cd2-243b-494c-9a7b-cb144201eef0-host\") pod \"crc-debug-f49pd\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.046291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8951cd2-243b-494c-9a7b-cb144201eef0-host\") pod \"crc-debug-f49pd\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.065742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46l5m\" (UniqueName: \"kubernetes.io/projected/f8951cd2-243b-494c-9a7b-cb144201eef0-kube-api-access-46l5m\") pod \"crc-debug-f49pd\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.149679 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:07 crc kubenswrapper[4768]: W1124 19:00:07.178107 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8951cd2_243b_494c_9a7b_cb144201eef0.slice/crio-6619c948e07b2c0190fac58d031f67d15c854a2dc69c8dfb43c5eab1a0dd6724 WatchSource:0}: Error finding container 6619c948e07b2c0190fac58d031f67d15c854a2dc69c8dfb43c5eab1a0dd6724: Status 404 returned error can't find the container with id 6619c948e07b2c0190fac58d031f67d15c854a2dc69c8dfb43c5eab1a0dd6724 Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.920085 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c7d332-c52b-4743-9da3-2bfe7e7f2078" path="/var/lib/kubelet/pods/f9c7d332-c52b-4743-9da3-2bfe7e7f2078/volumes" Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.963393 4768 generic.go:334] "Generic (PLEG): container finished" podID="f8951cd2-243b-494c-9a7b-cb144201eef0" containerID="31a374afb33cd12f59fbc3fa7afa249c9adb9b552bca4240e539e4903b06b340" exitCode=0 Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.963542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-f49pd" event={"ID":"f8951cd2-243b-494c-9a7b-cb144201eef0","Type":"ContainerDied","Data":"31a374afb33cd12f59fbc3fa7afa249c9adb9b552bca4240e539e4903b06b340"} Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.963606 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/crc-debug-f49pd" event={"ID":"f8951cd2-243b-494c-9a7b-cb144201eef0","Type":"ContainerStarted","Data":"6619c948e07b2c0190fac58d031f67d15c854a2dc69c8dfb43c5eab1a0dd6724"} Nov 24 19:00:07 crc kubenswrapper[4768]: I1124 19:00:07.997782 4768 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-7chgt/crc-debug-f49pd"] Nov 24 19:00:08 crc kubenswrapper[4768]: I1124 19:00:08.012427 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7chgt/crc-debug-f49pd"] Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.126925 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.191068 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8951cd2-243b-494c-9a7b-cb144201eef0-host\") pod \"f8951cd2-243b-494c-9a7b-cb144201eef0\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.191246 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46l5m\" (UniqueName: \"kubernetes.io/projected/f8951cd2-243b-494c-9a7b-cb144201eef0-kube-api-access-46l5m\") pod \"f8951cd2-243b-494c-9a7b-cb144201eef0\" (UID: \"f8951cd2-243b-494c-9a7b-cb144201eef0\") " Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.191288 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8951cd2-243b-494c-9a7b-cb144201eef0-host" (OuterVolumeSpecName: "host") pod "f8951cd2-243b-494c-9a7b-cb144201eef0" (UID: "f8951cd2-243b-494c-9a7b-cb144201eef0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.191798 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8951cd2-243b-494c-9a7b-cb144201eef0-host\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.201001 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8951cd2-243b-494c-9a7b-cb144201eef0-kube-api-access-46l5m" (OuterVolumeSpecName: "kube-api-access-46l5m") pod "f8951cd2-243b-494c-9a7b-cb144201eef0" (UID: "f8951cd2-243b-494c-9a7b-cb144201eef0"). InnerVolumeSpecName "kube-api-access-46l5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.293826 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46l5m\" (UniqueName: \"kubernetes.io/projected/f8951cd2-243b-494c-9a7b-cb144201eef0-kube-api-access-46l5m\") on node \"crc\" DevicePath \"\"" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.913723 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8951cd2-243b-494c-9a7b-cb144201eef0" path="/var/lib/kubelet/pods/f8951cd2-243b-494c-9a7b-cb144201eef0/volumes" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.992150 4768 scope.go:117] "RemoveContainer" containerID="31a374afb33cd12f59fbc3fa7afa249c9adb9b552bca4240e539e4903b06b340" Nov 24 19:00:09 crc kubenswrapper[4768]: I1124 19:00:09.992289 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7chgt/crc-debug-f49pd" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.284933 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hpbkl"] Nov 24 19:00:10 crc kubenswrapper[4768]: E1124 19:00:10.286181 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8951cd2-243b-494c-9a7b-cb144201eef0" containerName="container-00" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.286203 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8951cd2-243b-494c-9a7b-cb144201eef0" containerName="container-00" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.286688 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8951cd2-243b-494c-9a7b-cb144201eef0" containerName="container-00" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.298196 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.329098 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hpbkl"] Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.419215 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-utilities\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.419282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-catalog-content\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.419360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49d6n\" (UniqueName: \"kubernetes.io/projected/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-kube-api-access-49d6n\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.521306 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-utilities\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.521390 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-catalog-content\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.521507 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49d6n\" (UniqueName: \"kubernetes.io/projected/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-kube-api-access-49d6n\") pod \"community-operators-hpbkl\" (UID: 
\"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.521886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-utilities\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.522244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-catalog-content\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.553273 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49d6n\" (UniqueName: \"kubernetes.io/projected/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-kube-api-access-49d6n\") pod \"community-operators-hpbkl\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") " pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.635475 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpbkl" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.882181 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5sgv9"] Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.884320 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sgv9" Nov 24 19:00:10 crc kubenswrapper[4768]: I1124 19:00:10.896559 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5sgv9"] Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.030904 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9gdt\" (UniqueName: \"kubernetes.io/projected/871f450b-7842-4f3c-bb84-7441248278c3-kube-api-access-w9gdt\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9" Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.030985 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-utilities\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9" Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.031041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-catalog-content\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9" Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.107910 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hpbkl"] Nov 24 19:00:11 crc kubenswrapper[4768]: W1124 19:00:11.110754 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70a18fd1_762d_42f6_9d65_6ebbc5cfdd2f.slice/crio-5c1d099787ed6bae9c303a49a9c9fdcbd8c4632773ad10942999b85102687bf8 WatchSource:0}: Error finding container 5c1d099787ed6bae9c303a49a9c9fdcbd8c4632773ad10942999b85102687bf8: Status 404 returned error can't find the container with id 5c1d099787ed6bae9c303a49a9c9fdcbd8c4632773ad10942999b85102687bf8
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.132463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-catalog-content\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.132626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9gdt\" (UniqueName: \"kubernetes.io/projected/871f450b-7842-4f3c-bb84-7441248278c3-kube-api-access-w9gdt\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.132675 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-utilities\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.132899 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-catalog-content\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.132990 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-utilities\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.165917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9gdt\" (UniqueName: \"kubernetes.io/projected/871f450b-7842-4f3c-bb84-7441248278c3-kube-api-access-w9gdt\") pod \"redhat-operators-5sgv9\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") " pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.211453 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:11 crc kubenswrapper[4768]: I1124 19:00:11.642182 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5sgv9"]
Nov 24 19:00:11 crc kubenswrapper[4768]: W1124 19:00:11.644094 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod871f450b_7842_4f3c_bb84_7441248278c3.slice/crio-47ab1d92a690f4765a97b0ff4201ed96548b40a6051827fecfdbfe9c946e81ff WatchSource:0}: Error finding container 47ab1d92a690f4765a97b0ff4201ed96548b40a6051827fecfdbfe9c946e81ff: Status 404 returned error can't find the container with id 47ab1d92a690f4765a97b0ff4201ed96548b40a6051827fecfdbfe9c946e81ff
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.008618 4768 generic.go:334] "Generic (PLEG): container finished" podID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerID="9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157" exitCode=0
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.008663 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerDied","Data":"9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157"}
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.008713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerStarted","Data":"5c1d099787ed6bae9c303a49a9c9fdcbd8c4632773ad10942999b85102687bf8"}
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.010546 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.010561 4768 generic.go:334] "Generic (PLEG): container finished" podID="871f450b-7842-4f3c-bb84-7441248278c3" containerID="b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4" exitCode=0
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.010589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerDied","Data":"b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4"}
Nov 24 19:00:12 crc kubenswrapper[4768]: I1124 19:00:12.010609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerStarted","Data":"47ab1d92a690f4765a97b0ff4201ed96548b40a6051827fecfdbfe9c946e81ff"}
Nov 24 19:00:13 crc kubenswrapper[4768]: I1124 19:00:13.043294 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerStarted","Data":"178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9"}
Nov 24 19:00:14 crc kubenswrapper[4768]: I1124 19:00:14.053574 4768 generic.go:334] "Generic (PLEG): container finished" podID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerID="178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9" exitCode=0
Nov 24 19:00:14 crc kubenswrapper[4768]: I1124 19:00:14.053632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerDied","Data":"178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9"}
Nov 24 19:00:14 crc kubenswrapper[4768]: I1124 19:00:14.057234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerStarted","Data":"8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89"}
Nov 24 19:00:15 crc kubenswrapper[4768]: I1124 19:00:15.075832 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerStarted","Data":"d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190"}
Nov 24 19:00:15 crc kubenswrapper[4768]: I1124 19:00:15.110258 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hpbkl" podStartSLOduration=2.5224271050000002 podStartE2EDuration="5.110227305s" podCreationTimestamp="2025-11-24 19:00:10 +0000 UTC" firstStartedPulling="2025-11-24 19:00:12.010234882 +0000 UTC m=+4250.870816669" lastFinishedPulling="2025-11-24 19:00:14.598035092 +0000 UTC m=+4253.458616869" observedRunningTime="2025-11-24 19:00:15.108165891 +0000 UTC m=+4253.968747698" watchObservedRunningTime="2025-11-24 19:00:15.110227305 +0000 UTC m=+4253.970809082"
Nov 24 19:00:16 crc kubenswrapper[4768]: I1124 19:00:16.086680 4768 generic.go:334] "Generic (PLEG): container finished" podID="871f450b-7842-4f3c-bb84-7441248278c3" containerID="8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89" exitCode=0
Nov 24 19:00:16 crc kubenswrapper[4768]: I1124 19:00:16.086769 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerDied","Data":"8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89"}
Nov 24 19:00:17 crc kubenswrapper[4768]: I1124 19:00:17.101542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerStarted","Data":"2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617"}
Nov 24 19:00:17 crc kubenswrapper[4768]: I1124 19:00:17.119925 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5sgv9" podStartSLOduration=2.376477131 podStartE2EDuration="7.11990739s" podCreationTimestamp="2025-11-24 19:00:10 +0000 UTC" firstStartedPulling="2025-11-24 19:00:12.012253926 +0000 UTC m=+4250.872835703" lastFinishedPulling="2025-11-24 19:00:16.755684185 +0000 UTC m=+4255.616265962" observedRunningTime="2025-11-24 19:00:17.117190926 +0000 UTC m=+4255.977772723" watchObservedRunningTime="2025-11-24 19:00:17.11990739 +0000 UTC m=+4255.980489177"
Nov 24 19:00:20 crc kubenswrapper[4768]: I1124 19:00:20.636246 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hpbkl"
Nov 24 19:00:20 crc kubenswrapper[4768]: I1124 19:00:20.636866 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hpbkl"
Nov 24 19:00:20 crc kubenswrapper[4768]: I1124 19:00:20.677973 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hpbkl"
Nov 24 19:00:21 crc kubenswrapper[4768]: I1124 19:00:21.192001 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hpbkl"
Nov 24 19:00:21 crc kubenswrapper[4768]: I1124 19:00:21.211978 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:21 crc kubenswrapper[4768]: I1124 19:00:21.212042 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:21 crc kubenswrapper[4768]: I1124 19:00:21.253153 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hpbkl"]
Nov 24 19:00:22 crc kubenswrapper[4768]: I1124 19:00:22.280001 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5sgv9" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="registry-server" probeResult="failure" output=<
Nov 24 19:00:22 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s
Nov 24 19:00:22 crc kubenswrapper[4768]: >
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.158797 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hpbkl" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="registry-server" containerID="cri-o://d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190" gracePeriod=2
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.685779 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpbkl"
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.807619 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-utilities\") pod \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") "
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.807701 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49d6n\" (UniqueName: \"kubernetes.io/projected/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-kube-api-access-49d6n\") pod \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") "
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.807822 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-catalog-content\") pod \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\" (UID: \"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f\") "
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.809390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-utilities" (OuterVolumeSpecName: "utilities") pod "70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" (UID: "70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.817251 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-kube-api-access-49d6n" (OuterVolumeSpecName: "kube-api-access-49d6n") pod "70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" (UID: "70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f"). InnerVolumeSpecName "kube-api-access-49d6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.854642 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" (UID: "70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.910344 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.910758 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49d6n\" (UniqueName: \"kubernetes.io/projected/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-kube-api-access-49d6n\") on node \"crc\" DevicePath \"\""
Nov 24 19:00:23 crc kubenswrapper[4768]: I1124 19:00:23.910940 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.174452 4768 generic.go:334] "Generic (PLEG): container finished" podID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerID="d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190" exitCode=0
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.174543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerDied","Data":"d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190"}
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.174594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpbkl" event={"ID":"70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f","Type":"ContainerDied","Data":"5c1d099787ed6bae9c303a49a9c9fdcbd8c4632773ad10942999b85102687bf8"}
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.174648 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpbkl"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.174645 4768 scope.go:117] "RemoveContainer" containerID="d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.214586 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hpbkl"]
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.215452 4768 scope.go:117] "RemoveContainer" containerID="178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.229014 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hpbkl"]
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.241798 4768 scope.go:117] "RemoveContainer" containerID="9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.316566 4768 scope.go:117] "RemoveContainer" containerID="d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190"
Nov 24 19:00:24 crc kubenswrapper[4768]: E1124 19:00:24.317563 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190\": container with ID starting with d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190 not found: ID does not exist" containerID="d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.317605 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190"} err="failed to get container status \"d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190\": rpc error: code = NotFound desc = could not find container \"d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190\": container with ID starting with d26f40f49b60d4db6ec2c6812d89d07fc0c4f58576b9314181629a5ecee0a190 not found: ID does not exist"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.317630 4768 scope.go:117] "RemoveContainer" containerID="178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9"
Nov 24 19:00:24 crc kubenswrapper[4768]: E1124 19:00:24.318193 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9\": container with ID starting with 178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9 not found: ID does not exist" containerID="178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.318214 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9"} err="failed to get container status \"178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9\": rpc error: code = NotFound desc = could not find container \"178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9\": container with ID starting with 178656c0be878279b18cd182b89559d54ba4ebcfc1b3645f109d2aa1d4a08ba9 not found: ID does not exist"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.318231 4768 scope.go:117] "RemoveContainer" containerID="9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157"
Nov 24 19:00:24 crc kubenswrapper[4768]: E1124 19:00:24.319362 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157\": container with ID starting with 9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157 not found: ID does not exist" containerID="9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157"
Nov 24 19:00:24 crc kubenswrapper[4768]: I1124 19:00:24.319400 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157"} err="failed to get container status \"9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157\": rpc error: code = NotFound desc = could not find container \"9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157\": container with ID starting with 9f62f2aea009a6ec14587309f94eb9976a90e650a89456dbaa15a18904c10157 not found: ID does not exist"
Nov 24 19:00:25 crc kubenswrapper[4768]: I1124 19:00:25.918928 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" path="/var/lib/kubelet/pods/70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f/volumes"
Nov 24 19:00:31 crc kubenswrapper[4768]: I1124 19:00:31.299576 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:31 crc kubenswrapper[4768]: I1124 19:00:31.379927 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:31 crc kubenswrapper[4768]: I1124 19:00:31.545699 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5sgv9"]
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.264474 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5sgv9" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="registry-server" containerID="cri-o://2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617" gracePeriod=2
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.801627 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.844975 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-utilities\") pod \"871f450b-7842-4f3c-bb84-7441248278c3\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") "
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.845115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-catalog-content\") pod \"871f450b-7842-4f3c-bb84-7441248278c3\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") "
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.845434 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9gdt\" (UniqueName: \"kubernetes.io/projected/871f450b-7842-4f3c-bb84-7441248278c3-kube-api-access-w9gdt\") pod \"871f450b-7842-4f3c-bb84-7441248278c3\" (UID: \"871f450b-7842-4f3c-bb84-7441248278c3\") "
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.849776 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-utilities" (OuterVolumeSpecName: "utilities") pod "871f450b-7842-4f3c-bb84-7441248278c3" (UID: "871f450b-7842-4f3c-bb84-7441248278c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.862618 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871f450b-7842-4f3c-bb84-7441248278c3-kube-api-access-w9gdt" (OuterVolumeSpecName: "kube-api-access-w9gdt") pod "871f450b-7842-4f3c-bb84-7441248278c3" (UID: "871f450b-7842-4f3c-bb84-7441248278c3"). InnerVolumeSpecName "kube-api-access-w9gdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.946390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "871f450b-7842-4f3c-bb84-7441248278c3" (UID: "871f450b-7842-4f3c-bb84-7441248278c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.947750 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9gdt\" (UniqueName: \"kubernetes.io/projected/871f450b-7842-4f3c-bb84-7441248278c3-kube-api-access-w9gdt\") on node \"crc\" DevicePath \"\""
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.947769 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 19:00:33 crc kubenswrapper[4768]: I1124 19:00:33.947777 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871f450b-7842-4f3c-bb84-7441248278c3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.280436 4768 generic.go:334] "Generic (PLEG): container finished" podID="871f450b-7842-4f3c-bb84-7441248278c3" containerID="2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617" exitCode=0
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.280529 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerDied","Data":"2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617"}
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.280609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5sgv9" event={"ID":"871f450b-7842-4f3c-bb84-7441248278c3","Type":"ContainerDied","Data":"47ab1d92a690f4765a97b0ff4201ed96548b40a6051827fecfdbfe9c946e81ff"}
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.280641 4768 scope.go:117] "RemoveContainer" containerID="2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.280568 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5sgv9"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.302860 4768 scope.go:117] "RemoveContainer" containerID="8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.317272 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5sgv9"]
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.331855 4768 scope.go:117] "RemoveContainer" containerID="b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.333052 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5sgv9"]
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.371289 4768 scope.go:117] "RemoveContainer" containerID="2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617"
Nov 24 19:00:34 crc kubenswrapper[4768]: E1124 19:00:34.372221 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617\": container with ID starting with 2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617 not found: ID does not exist" containerID="2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.372279 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617"} err="failed to get container status \"2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617\": rpc error: code = NotFound desc = could not find container \"2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617\": container with ID starting with 2c8f7b32b6b2d23ed2f0dcf41afe567900f84c4d8cb61091939233dde0e8c617 not found: ID does not exist"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.372305 4768 scope.go:117] "RemoveContainer" containerID="8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89"
Nov 24 19:00:34 crc kubenswrapper[4768]: E1124 19:00:34.372914 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89\": container with ID starting with 8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89 not found: ID does not exist" containerID="8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.372995 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89"} err="failed to get container status \"8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89\": rpc error: code = NotFound desc = could not find container \"8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89\": container with ID starting with 8a70b16798735d3da1721b2f39bd37b71855ad83a383c7f2c3e0a8b5f1f8bb89 not found: ID does not exist"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.373061 4768 scope.go:117] "RemoveContainer" containerID="b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4"
Nov 24 19:00:34 crc kubenswrapper[4768]: E1124 19:00:34.374278 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4\": container with ID starting with b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4 not found: ID does not exist" containerID="b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4"
Nov 24 19:00:34 crc kubenswrapper[4768]: I1124 19:00:34.374361 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4"} err="failed to get container status \"b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4\": rpc error: code = NotFound desc = could not find container \"b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4\": container with ID starting with b633768dcce1cb1cd8a300fe00a432e9c8af85c9d96c3900a38b1b72fd61cca4 not found: ID does not exist"
Nov 24 19:00:35 crc kubenswrapper[4768]: I1124 19:00:35.917078 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871f450b-7842-4f3c-bb84-7441248278c3" path="/var/lib/kubelet/pods/871f450b-7842-4f3c-bb84-7441248278c3/volumes"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.282974 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbf4cbf68-zhhj4_22661dfe-b7e1-4894-ae13-dab13e09c845/barbican-api/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.355961 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbf4cbf68-zhhj4_22661dfe-b7e1-4894-ae13-dab13e09c845/barbican-api-log/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.435687 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-97698dcdb-54zqg_5cb6b015-ae5e-438f-9aec-c25982a2febc/barbican-keystone-listener/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.500689 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-97698dcdb-54zqg_5cb6b015-ae5e-438f-9aec-c25982a2febc/barbican-keystone-listener-log/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.618739 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b7d468cdf-9fjfm_b343e1cc-a6b5-4074-98b3-a4bddb9b2730/barbican-worker/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.653885 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b7d468cdf-9fjfm_b343e1cc-a6b5-4074-98b3-a4bddb9b2730/barbican-worker-log/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.774129 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f6r8b_0d74256a-a4fc-4ecf-a57c-09aa5686878b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.881770 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fprbx_03a4429e-4032-4d71-adc7-7257ac152323/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:00:59 crc kubenswrapper[4768]: I1124 19:00:59.992643 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-q24g2_0938fce9-58c6-4933-aeb3-49e2fe28bf0f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.103906 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xrt6d_ca59c4d5-5455-49a2-885e-d6e8eb3103fd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.161556 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400181-5dd2t"]
Nov 24 19:01:00 crc kubenswrapper[4768]: E1124 19:01:00.161982 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="extract-utilities"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162000 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="extract-utilities"
Nov 24 19:01:00 crc kubenswrapper[4768]: E1124 19:01:00.162015 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="extract-utilities"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162022 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="extract-utilities"
Nov 24 19:01:00 crc kubenswrapper[4768]: E1124 19:01:00.162037 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="extract-content"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162042 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="extract-content"
Nov 24 19:01:00 crc kubenswrapper[4768]: E1124 19:01:00.162054 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="registry-server"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162060 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="registry-server"
Nov 24 19:01:00 crc kubenswrapper[4768]: E1124 19:01:00.162076 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="registry-server"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162082 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="registry-server"
Nov 24 19:01:00 crc kubenswrapper[4768]: E1124 19:01:00.162118 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="extract-content"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162125 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="extract-content"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162306 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="871f450b-7842-4f3c-bb84-7441248278c3" containerName="registry-server"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.162326 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a18fd1-762d-42f6-9d65-6ebbc5cfdd2f" containerName="registry-server"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.163025 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.174677 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400181-5dd2t"]
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.320787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-combined-ca-bundle\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.321012 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj6lz\" (UniqueName: \"kubernetes.io/projected/2efc3794-f03f-469c-9882-bad25688c861-kube-api-access-mj6lz\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.321063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-fernet-keys\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.321093 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-config-data\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.423694 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-combined-ca-bundle\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.423857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj6lz\" (UniqueName: \"kubernetes.io/projected/2efc3794-f03f-469c-9882-bad25688c861-kube-api-access-mj6lz\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.423892 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-fernet-keys\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.423917 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-config-data\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.491570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-config-data\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.500590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-fernet-keys\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.500600 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj6lz\" (UniqueName: \"kubernetes.io/projected/2efc3794-f03f-469c-9882-bad25688c861-kube-api-access-mj6lz\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.507242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-combined-ca-bundle\") pod \"keystone-cron-29400181-5dd2t\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") " pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.694114 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/ceilometer-notification-agent/0.log"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.696834 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/ceilometer-central-agent/0.log"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.720068 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/sg-core/0.log"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.731431 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_81427e5e-c0e8-4445-8a60-2b5dcdcf9a52/proxy-httpd/0.log"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.789096 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.952194 4768 scope.go:117] "RemoveContainer" containerID="4a951ae6d3f9f94bb72a0b96bdfd175f6450c2a81fc1bc5cf49313457506bfcf"
Nov 24 19:01:00 crc kubenswrapper[4768]: I1124 19:01:00.968694 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-flfz8_d974ce0f-88e9-465d-9c74-6a7531593c4b/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.000064 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcxdf_2d65345f-930f-4b71-9968-a613d7c11a33/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.270576 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6cd1c8b-47af-4035-9e6f-601dd5b94cd3/cinder-api-log/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.288339 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6cd1c8b-47af-4035-9e6f-601dd5b94cd3/cinder-api/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.301066 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400181-5dd2t"]
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.460460 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d187717-3b2d-42c1-9daa-6db0b5d2c14c/probe/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.581314 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400181-5dd2t" event={"ID":"2efc3794-f03f-469c-9882-bad25688c861","Type":"ContainerStarted","Data":"5018e2cae6e3cf36ee7996180230627568e603e2d80aaa33b3ef111138e20806"}
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.583029 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400181-5dd2t" event={"ID":"2efc3794-f03f-469c-9882-bad25688c861","Type":"ContainerStarted","Data":"2ff141f791d56c9502de28e1696f1050df779ca68ab95a5868eb938ca51343cc"}
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.590229 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_40369462-11a9-45f0-ad9b-cec7971e9414/cinder-scheduler/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.604052 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400181-5dd2t" podStartSLOduration=1.604026861 podStartE2EDuration="1.604026861s" podCreationTimestamp="2025-11-24 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:01:01.597785134 +0000 UTC m=+4300.458366911" watchObservedRunningTime="2025-11-24 19:01:01.604026861 +0000 UTC m=+4300.464608638"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.722619 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_40369462-11a9-45f0-ad9b-cec7971e9414/probe/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.822514 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d187717-3b2d-42c1-9daa-6db0b5d2c14c/cinder-backup/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.968379 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_97567296-4a8c-4270-96b4-83eaabf8194b/cinder-volume/0.log"
Nov 24 19:01:01 crc kubenswrapper[4768]: I1124 19:01:01.986252 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_97567296-4a8c-4270-96b4-83eaabf8194b/probe/0.log"
Nov 24 19:01:02 crc kubenswrapper[4768]: I1124 19:01:02.066724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tz7j7_1dd3638b-dad5-4d28-8451-1ef9cbe46251/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:02 crc kubenswrapper[4768]: I1124 19:01:02.169979 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6q4sz_cdd7e3c1-531f-4b9b-99bb-057c5078cf95/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:02 crc kubenswrapper[4768]: I1124 19:01:02.834923 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-l8v2f_841499fa-7a48-465c-891c-13987e5064d5/init/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.065105 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-l8v2f_841499fa-7a48-465c-891c-13987e5064d5/init/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.147612 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9/glance-httpd/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.151790 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-l8v2f_841499fa-7a48-465c-891c-13987e5064d5/dnsmasq-dns/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.156611 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5dfeeb13-ec6b-432a-9aa4-d3a0ee4d61c9/glance-log/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.329447 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7d82efd-27b9-4b06-a476-230d3dbbb176/glance-log/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.352877 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7d82efd-27b9-4b06-a476-230d3dbbb176/glance-httpd/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.509510 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85f468447b-zhvc8_cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274/horizon/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.603691 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kvdqk_f5889b94-1134-4803-88de-f82ae87f5720/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.711518 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-85f468447b-zhvc8_cc7f58c7-70ae-49a4-b2d5-1ea9fcce4274/horizon-log/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.741987 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8th9m_afb6ccb1-e75a-470b-9755-a3359c7d23fd/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.984750 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_20d3ec89-0004-4ed5-ae4b-c9dcf85a3151/kube-state-metrics/0.log"
Nov 24 19:01:03 crc kubenswrapper[4768]: I1124 19:01:03.998542 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-56748c45b5-4df84_434c7b39-9f1a-4032-b6fb-41c315a3a521/keystone-api/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.167016 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5gwbk_ad4a499f-9065-421e-9c19-6b6ae06f255e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.267839 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f30f2c98-4600-4324-b983-59a519225520/manila-api-log/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.437726 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f30f2c98-4600-4324-b983-59a519225520/manila-api/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.459206 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_685b1427-a20b-4fb0-a6c9-42ec98f11d67/probe/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.546969 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_685b1427-a20b-4fb0-a6c9-42ec98f11d67/manila-scheduler/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.621376 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59a6e210-36bf-431b-a1b4-3784ec202cde/probe/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.685706 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59a6e210-36bf-431b-a1b4-3784ec202cde/manila-share/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.910381 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-844dbf79df-5t2np_6f9024a7-971e-460c-8b41-157dc2403a44/neutron-httpd/0.log"
Nov 24 19:01:04 crc kubenswrapper[4768]: I1124 19:01:04.928779 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-844dbf79df-5t2np_6f9024a7-971e-460c-8b41-157dc2403a44/neutron-api/0.log"
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.095351 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ck4cz_edac5bf5-aa67-431e-9e1a-3551d9323772/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.399735 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_09017e2b-873f-446e-9d2c-8dcdddb26732/nova-api-log/0.log"
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.559115 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ae1cfe70-c0e5-4191-8605-c57257bfef1f/nova-cell0-conductor-conductor/0.log"
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.618269 4768 generic.go:334] "Generic (PLEG): container finished" podID="2efc3794-f03f-469c-9882-bad25688c861" containerID="5018e2cae6e3cf36ee7996180230627568e603e2d80aaa33b3ef111138e20806" exitCode=0
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.618311 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400181-5dd2t" event={"ID":"2efc3794-f03f-469c-9882-bad25688c861","Type":"ContainerDied","Data":"5018e2cae6e3cf36ee7996180230627568e603e2d80aaa33b3ef111138e20806"}
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.761208 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_09017e2b-873f-446e-9d2c-8dcdddb26732/nova-api-api/0.log"
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.762763 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9883b617-fef7-4b4e-9856-e7075ba94d9e/nova-cell1-conductor-conductor/0.log"
Nov 24 19:01:05 crc kubenswrapper[4768]: I1124 19:01:05.860024 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2f5e8953-6f74-4185-8020-585c1fc3d9f1/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.001153 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xf7hz_fd99c2dc-4b0c-49e8-bc2e-59a8ad923066/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.178449 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9e26f8aa-16b5-445c-9568-4e56b3665004/nova-metadata-log/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.480041 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_758c992e-f62f-4efd-af1d-0c1279d68544/mysql-bootstrap/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.487798 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba0653c2-07ff-4e12-a6ab-d1f1f81a5344/nova-scheduler-scheduler/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.692848 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_758c992e-f62f-4efd-af1d-0c1279d68544/mysql-bootstrap/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.699148 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_758c992e-f62f-4efd-af1d-0c1279d68544/galera/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.933765 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8145b894-fd09-47c1-b9c2-0cb4cfa6d293/mysql-bootstrap/0.log"
Nov 24 19:01:06 crc kubenswrapper[4768]: I1124 19:01:06.993746 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.053236 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj6lz\" (UniqueName: \"kubernetes.io/projected/2efc3794-f03f-469c-9882-bad25688c861-kube-api-access-mj6lz\") pod \"2efc3794-f03f-469c-9882-bad25688c861\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") "
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.053395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-config-data\") pod \"2efc3794-f03f-469c-9882-bad25688c861\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") "
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.053591 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-combined-ca-bundle\") pod \"2efc3794-f03f-469c-9882-bad25688c861\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") "
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.053629 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-fernet-keys\") pod \"2efc3794-f03f-469c-9882-bad25688c861\" (UID: \"2efc3794-f03f-469c-9882-bad25688c861\") "
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.059282 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2efc3794-f03f-469c-9882-bad25688c861" (UID: "2efc3794-f03f-469c-9882-bad25688c861"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.070441 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2efc3794-f03f-469c-9882-bad25688c861-kube-api-access-mj6lz" (OuterVolumeSpecName: "kube-api-access-mj6lz") pod "2efc3794-f03f-469c-9882-bad25688c861" (UID: "2efc3794-f03f-469c-9882-bad25688c861"). InnerVolumeSpecName "kube-api-access-mj6lz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.149327 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2efc3794-f03f-469c-9882-bad25688c861" (UID: "2efc3794-f03f-469c-9882-bad25688c861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.155948 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.155982 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.155992 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj6lz\" (UniqueName: \"kubernetes.io/projected/2efc3794-f03f-469c-9882-bad25688c861-kube-api-access-mj6lz\") on node \"crc\" DevicePath \"\""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.211716 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-config-data" (OuterVolumeSpecName: "config-data") pod "2efc3794-f03f-469c-9882-bad25688c861" (UID: "2efc3794-f03f-469c-9882-bad25688c861"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.257280 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efc3794-f03f-469c-9882-bad25688c861-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.444656 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8145b894-fd09-47c1-b9c2-0cb4cfa6d293/mysql-bootstrap/0.log"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.465080 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8145b894-fd09-47c1-b9c2-0cb4cfa6d293/galera/0.log"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.635472 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400181-5dd2t" event={"ID":"2efc3794-f03f-469c-9882-bad25688c861","Type":"ContainerDied","Data":"2ff141f791d56c9502de28e1696f1050df779ca68ab95a5868eb938ca51343cc"}
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.635539 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400181-5dd2t"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.635543 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff141f791d56c9502de28e1696f1050df779ca68ab95a5868eb938ca51343cc"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.699232 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e5ca5655-0b68-4c97-984f-2085144d98dc/openstackclient/0.log"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.772224 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f9558_2beabb7a-c951-4e24-8a6e-83ceb0ebb087/openstack-network-exporter/0.log"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.932511 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovsdb-server-init/0.log"
Nov 24 19:01:07 crc kubenswrapper[4768]: I1124 19:01:07.980635 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9e26f8aa-16b5-445c-9568-4e56b3665004/nova-metadata-metadata/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.146997 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovsdb-server-init/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.228466 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovs-vswitchd/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.260239 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xb8qp_509a2a18-bedf-4f92-bc91-608b5af92c1e/ovsdb-server/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.371674 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zlg8p_710c430d-b973-47b9-9917-2db7864f7570/ovn-controller/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.541846 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zcf2w_fd87ee72-91d9-40a2-a95f-f4358b524d8f/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.596345 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09191ff5-4686-4243-a0b4-3dd710ead568/openstack-network-exporter/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.652463 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09191ff5-4686-4243-a0b4-3dd710ead568/ovn-northd/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.853531 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c15f153b-967a-4edd-8c49-fd474a1d5de3/openstack-network-exporter/0.log"
Nov 24 19:01:08 crc kubenswrapper[4768]: I1124 19:01:08.888973 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c15f153b-967a-4edd-8c49-fd474a1d5de3/ovsdbserver-nb/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.069630 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4b5d5ef6-f6b9-4930-8426-a0718b3a754f/openstack-network-exporter/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.160934 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4b5d5ef6-f6b9-4930-8426-a0718b3a754f/ovsdbserver-sb/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.212644 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b54949f4-59kjn_43c2665c-ef67-4325-bad9-7e42cf3195bd/placement-api/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.214613 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_40180404-c438-415c-8787-05a1cc8461d0/memcached/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.364823 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b54949f4-59kjn_43c2665c-ef67-4325-bad9-7e42cf3195bd/placement-log/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.396469 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f61bf1e8-52b3-4777-ad9b-52c8a1cad06c/setup-container/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.562504 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f61bf1e8-52b3-4777-ad9b-52c8a1cad06c/setup-container/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.565397 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f61bf1e8-52b3-4777-ad9b-52c8a1cad06c/rabbitmq/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.617950 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2d3ded99-92ff-43cc-83de-6042d6c83acf/setup-container/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.815704 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2d3ded99-92ff-43cc-83de-6042d6c83acf/setup-container/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.825734 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vp8lg_474b1f4d-271b-4abb-bad4-fef9d86fff99/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:09 crc kubenswrapper[4768]: I1124 19:01:09.839168 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2d3ded99-92ff-43cc-83de-6042d6c83acf/rabbitmq/0.log"
Nov 24 19:01:10 crc kubenswrapper[4768]: I1124 19:01:10.463560 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gdnxb_621b6bcf-7a5c-4a85-9a8f-379e95bad6ac/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:10 crc kubenswrapper[4768]: I1124 19:01:10.489462 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qp54b_e2f4a9fd-b80f-44d1-80b8-298119d3b967/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:10 crc kubenswrapper[4768]: I1124 19:01:10.528635 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-l4ldg_e2c2b5cc-b203-4e5b-be7c-0cc5703b2d76/ssh-known-hosts-edpm-deployment/0.log"
Nov 24 19:01:10 crc kubenswrapper[4768]: I1124 19:01:10.707238 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a70c965c-d29f-4286-b2e4-a580073783c5/tempest-tests-tempest-tests-runner/0.log"
Nov 24 19:01:10 crc kubenswrapper[4768]: I1124 19:01:10.794623 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_40331542-20c7-4f93-8571-cc1bcaad9d48/test-operator-logs-container/0.log"
Nov 24 19:01:10 crc kubenswrapper[4768]: I1124 19:01:10.897764 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-84jr8_0ca0ce9c-abe8-49c5-9aed-d63e4bae7811/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 19:01:35 crc kubenswrapper[4768]: I1124 19:01:35.107734 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/util/0.log"
Nov 24 19:01:35 crc kubenswrapper[4768]: I1124 19:01:35.839868 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/util/0.log"
Nov 24 19:01:35 crc kubenswrapper[4768]: I1124 19:01:35.895138 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/pull/0.log"
Nov 24 19:01:35 crc kubenswrapper[4768]: I1124 19:01:35.915325 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/pull/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.058018 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/pull/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.075336 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/util/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.112982 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97921fde98405fcf786bdfad979ee1b6baaacc76ca58626fa30c19a39am96dn_339cc82e-8ca6-4822-b5b5-48be6f45f30c/extract/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.239301 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-wtd7r_c6d746c7-cf41-4ebd-95ba-e23836f6e5d4/kube-rbac-proxy/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.281375 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-nx9kk_ab197189-f8ba-4b06-b62a-73dd90994a39/kube-rbac-proxy/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.324939 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-wtd7r_c6d746c7-cf41-4ebd-95ba-e23836f6e5d4/manager/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.488938 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-nx9kk_ab197189-f8ba-4b06-b62a-73dd90994a39/manager/0.log"
Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.492443 4768 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jg4mn_52de35ae-ab63-4e1b-88d1-e42033ee56b7/kube-rbac-proxy/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.510648 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jg4mn_52de35ae-ab63-4e1b-88d1-e42033ee56b7/manager/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.634103 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-69fbff6fff-t2zl8_28171867-a10a-4f0c-840d-ce55038bcd93/kube-rbac-proxy/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.756377 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-69fbff6fff-t2zl8_28171867-a10a-4f0c-840d-ce55038bcd93/manager/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.814106 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-xw2jj_afa155f0-dde8-4d99-a454-527207b3189c/manager/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.845284 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-xw2jj_afa155f0-dde8-4d99-a454-527207b3189c/kube-rbac-proxy/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.907373 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-k5fkx_34b164fd-5d2f-4c00-83dc-ad8a90f4b94c/kube-rbac-proxy/0.log" Nov 24 19:01:36 crc kubenswrapper[4768]: I1124 19:01:36.997053 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-k5fkx_34b164fd-5d2f-4c00-83dc-ad8a90f4b94c/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.072132 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-2wljz_b44a0f95-c792-4375-9292-34a95608c64f/kube-rbac-proxy/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.224050 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858778c9dc-2wljz_b44a0f95-c792-4375-9292-34a95608c64f/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.235257 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-m6skf_ab3b5e40-6284-45cb-822e-a9490b1794c5/kube-rbac-proxy/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.327723 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-m6skf_ab3b5e40-6284-45cb-822e-a9490b1794c5/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.431569 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-5sprh_8d6fc3b4-896a-4480-9371-930a2882151e/kube-rbac-proxy/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.475325 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-5sprh_8d6fc3b4-896a-4480-9371-930a2882151e/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.554269 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-b6vk2_8d92c413-b62d-4896-ae13-1ee9608aa65a/kube-rbac-proxy/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.639757 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-b6vk2_8d92c413-b62d-4896-ae13-1ee9608aa65a/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.710608 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-846gl_2c04229f-5a27-4477-816d-60d5f1977144/kube-rbac-proxy/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.744494 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-846gl_2c04229f-5a27-4477-816d-60d5f1977144/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.839853 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-hdfsr_7a599ec7-7361-4e08-8d81-3cfc208d41b5/kube-rbac-proxy/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.913846 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-hdfsr_7a599ec7-7361-4e08-8d81-3cfc208d41b5/manager/0.log" Nov 24 19:01:37 crc kubenswrapper[4768]: I1124 19:01:37.977269 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-4mqdl_583db3d6-5f9c-4ce1-8214-06963fe50f96/kube-rbac-proxy/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.125762 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-4mqdl_583db3d6-5f9c-4ce1-8214-06963fe50f96/manager/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.130860 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-f95nv_29ac0137-f29a-4a1f-8435-f4ec688a5948/kube-rbac-proxy/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.191071 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-f95nv_29ac0137-f29a-4a1f-8435-f4ec688a5948/manager/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.302420 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lv927_d54c925d-91d6-4bb8-acff-623c4f213352/manager/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.317979 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-lv927_d54c925d-91d6-4bb8-acff-623c4f213352/kube-rbac-proxy/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.917529 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7b874cbcf5-5ssbf_029c591e-99fb-494c-93f1-c695b2b8b744/operator/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 19:01:38.930925 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xx6dm_911161df-90b7-4df2-93d4-9e91b2bf2e91/registry-server/0.log" Nov 24 19:01:38 crc kubenswrapper[4768]: I1124 
19:01:38.974038 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-fz64p_0f74f3df-ed63-4105-882e-c3122177da3a/kube-rbac-proxy/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.189732 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2t64b_78e75462-3120-4d07-a571-56727914e173/kube-rbac-proxy/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.201581 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-fz64p_0f74f3df-ed63-4105-882e-c3122177da3a/manager/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.204003 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-2t64b_78e75462-3120-4d07-a571-56727914e173/manager/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.456618 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-csz8k_dfa124f2-a194-4cae-bfed-eb56288e56a6/operator/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.513175 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-4dwgz_8fe91de1-efe8-43e5-8b29-89043d06e880/kube-rbac-proxy/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.614307 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-4dwgz_8fe91de1-efe8-43e5-8b29-89043d06e880/manager/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.798932 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-lfbgz_4d4b069e-80e6-409b-aeee-130ac4351f32/kube-rbac-proxy/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.808711 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bdb766b46-6b4tf_ba241c62-4e0e-4e9b-bff9-4f590d0a1d28/manager/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.811795 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-lfbgz_4d4b069e-80e6-409b-aeee-130ac4351f32/manager/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.963047 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-d2hdv_1f0a9442-916e-442d-bb0f-6060ba5915c8/kube-rbac-proxy/0.log" Nov 24 19:01:39 crc kubenswrapper[4768]: I1124 19:01:39.984433 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-d2hdv_1f0a9442-916e-442d-bb0f-6060ba5915c8/manager/0.log" Nov 24 19:01:40 crc kubenswrapper[4768]: I1124 19:01:40.013776 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-2264q_c6d6eee2-6cb1-411d-837f-921b1c6c92fb/kube-rbac-proxy/0.log" Nov 24 19:01:40 crc kubenswrapper[4768]: I1124 19:01:40.039791 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-2264q_c6d6eee2-6cb1-411d-837f-921b1c6c92fb/manager/0.log" Nov 24 19:01:43 crc kubenswrapper[4768]: 
I1124 19:01:43.657023 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:01:43 crc kubenswrapper[4768]: I1124 19:01:43.657931 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:01:58 crc kubenswrapper[4768]: I1124 19:01:58.804690 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rrct4_622d16ca-1d8c-49e7-8ad7-c7b33b9003f2/control-plane-machine-set-operator/0.log" Nov 24 19:01:58 crc kubenswrapper[4768]: I1124 19:01:58.981844 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xxlhx_f4312574-3ae8-49f4-a799-e20198b71149/kube-rbac-proxy/0.log" Nov 24 19:01:59 crc kubenswrapper[4768]: I1124 19:01:59.010441 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xxlhx_f4312574-3ae8-49f4-a799-e20198b71149/machine-api-operator/0.log" Nov 24 19:02:11 crc kubenswrapper[4768]: I1124 19:02:11.169771 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-66xg6_24caa3d8-4ce8-4918-82c5-2c71e2b95e01/cert-manager-controller/0.log" Nov 24 19:02:11 crc kubenswrapper[4768]: I1124 19:02:11.313659 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8nrg2_62d5c0eb-892b-455f-8ddd-b2fdb47ea42d/cert-manager-cainjector/0.log" Nov 24 19:02:11 crc kubenswrapper[4768]: I1124 19:02:11.371305 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2qvx7_3d150fe0-3a31-4024-b158-8dd172e9aa1e/cert-manager-webhook/0.log" Nov 24 19:02:13 crc kubenswrapper[4768]: I1124 19:02:13.656754 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:02:13 crc kubenswrapper[4768]: I1124 19:02:13.657178 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:02:22 crc kubenswrapper[4768]: I1124 19:02:22.969629 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-hnwzz_07b3a9eb-7a3b-4f8c-b205-0becb2a0168b/nmstate-console-plugin/0.log" Nov 24 19:02:23 crc kubenswrapper[4768]: I1124 19:02:23.142181 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qxtkj_70c8f860-b6e0-4407-bfd8-be567169db2c/nmstate-handler/0.log" Nov 24 19:02:23 crc kubenswrapper[4768]: I1124 19:02:23.143139 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-676sm_822888f3-7b2d-48e4-a58e-42885dd6edf0/kube-rbac-proxy/0.log" Nov 24 19:02:23 crc kubenswrapper[4768]: I1124 19:02:23.209444 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-676sm_822888f3-7b2d-48e4-a58e-42885dd6edf0/nmstate-metrics/0.log" Nov 24 19:02:23 crc kubenswrapper[4768]: I1124 19:02:23.353004 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-65z5p_2de3be4f-3f3a-4789-ad93-341bc12f368e/nmstate-operator/0.log" Nov 24 19:02:23 crc kubenswrapper[4768]: I1124 19:02:23.387363 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-fdltj_204f91a8-34ab-4a27-96eb-1602cb1f1ed8/nmstate-webhook/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.408856 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8wcfs_d270c276-5cc7-40cb-a690-27a3e3b5d29a/kube-rbac-proxy/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.474998 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-8wcfs_d270c276-5cc7-40cb-a690-27a3e3b5d29a/controller/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.550780 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.736724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.758302 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.768248 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.803998 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.921661 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.955539 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.955610 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 19:02:37 crc kubenswrapper[4768]: I1124 19:02:37.985029 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.172311 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-frr-files/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.179829 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-reloader/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.188546 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/cp-metrics/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.217179 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/controller/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.384068 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/frr-metrics/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.390143 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/kube-rbac-proxy/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.453794 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/kube-rbac-proxy-frr/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.637270 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/reloader/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.683408 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-bmlh2_d52b407a-4b4f-47ce-9cc4-244b3fca2db4/frr-k8s-webhook-server/0.log" Nov 24 19:02:38 crc kubenswrapper[4768]: I1124 19:02:38.898741 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65d776c5c5-mm52q_59812c96-7130-431b-8e63-08a04a76a481/manager/0.log" Nov 24 19:02:39 crc kubenswrapper[4768]: I1124 19:02:39.683270 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xj9kr_d1e6e133-4775-411b-b0e1-516e2cd2e276/kube-rbac-proxy/0.log" Nov 24 19:02:39 crc kubenswrapper[4768]: I1124 19:02:39.711457 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-ddc448d79-8bqsf_60867050-3f57-4b08-ace3-524c54adfeff/webhook-server/0.log" Nov 24 19:02:39 crc kubenswrapper[4768]: I1124 19:02:39.747245 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-br7zz_cbca4cc0-b37d-4521-8c37-706beb2a4030/frr/0.log" Nov 24 19:02:40 crc kubenswrapper[4768]: I1124 19:02:40.123578 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xj9kr_d1e6e133-4775-411b-b0e1-516e2cd2e276/speaker/0.log" Nov 24 19:02:43 crc kubenswrapper[4768]: I1124 19:02:43.656408 4768 patch_prober.go:28] interesting pod/machine-config-daemon-ljwzj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:02:43 crc kubenswrapper[4768]: I1124 19:02:43.657128 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:02:43 crc kubenswrapper[4768]: I1124 
19:02:43.657191 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" Nov 24 19:02:43 crc kubenswrapper[4768]: I1124 19:02:43.658087 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47"} pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:02:43 crc kubenswrapper[4768]: I1124 19:02:43.658158 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerName="machine-config-daemon" containerID="cri-o://7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" gracePeriod=600 Nov 24 19:02:43 crc kubenswrapper[4768]: E1124 19:02:43.805086 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:02:44 crc kubenswrapper[4768]: I1124 19:02:44.525945 4768 generic.go:334] "Generic (PLEG): container finished" podID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" exitCode=0 Nov 24 19:02:44 crc kubenswrapper[4768]: I1124 19:02:44.525996 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" event={"ID":"423ac327-22e2-4cc9-ba57-a1b2fc6f4bda","Type":"ContainerDied","Data":"7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47"} Nov 24 19:02:44 crc kubenswrapper[4768]: I1124 19:02:44.526050 4768 scope.go:117] "RemoveContainer" containerID="6b3fe524df55b78a58eafd6e6ba92acc5e18774135a2707d5c571dc2e8a1d97a" Nov 24 19:02:44 crc kubenswrapper[4768]: I1124 19:02:44.526855 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:02:44 crc kubenswrapper[4768]: E1124 19:02:44.527288 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:02:52 crc kubenswrapper[4768]: I1124 19:02:52.671825 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/util/0.log" Nov 24 19:02:52 crc kubenswrapper[4768]: I1124 19:02:52.815907 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/util/0.log" Nov 24 19:02:52 crc kubenswrapper[4768]: I1124 19:02:52.835709 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/pull/0.log" Nov 24 19:02:52 crc kubenswrapper[4768]: I1124 19:02:52.857453 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/pull/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.036814 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/extract/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.041524 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/pull/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.043777 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e87xrg_1624fb3c-139b-48e7-9b52-36f82ffacfa6/util/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.213944 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-utilities/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.357969 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-utilities/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.384187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-content/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.403155 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-content/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.569835 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-content/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.584533 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/extract-utilities/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.726335 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mp249_0808f00d-bd89-4029-a8f1-3c81c1b9b4cb/registry-server/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.822971 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-utilities/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.964097 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-content/0.log" Nov 24 19:02:53 crc kubenswrapper[4768]: I1124 19:02:53.965175 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-utilities/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.006585 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-content/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.126153 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-utilities/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.129472 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/extract-content/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.343948 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/util/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.518988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/util/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.585865 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/pull/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.610693 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/pull/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.738093 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/util/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.829808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/pull/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.852267 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68s588_57e06364-1ec6-4ed6-b123-c52044bd3adb/extract/0.log" Nov 24 19:02:54 crc kubenswrapper[4768]: I1124 19:02:54.856750 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8n975_e5b8263d-5b26-40f8-a344-761b9d19d252/registry-server/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.030197 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vtrzd_d17e8f38-c1cf-4774-ad10-d2e08512c158/marketplace-operator/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.061559 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-utilities/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.227043 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-utilities/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.242765 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-content/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.260809 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-content/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.421061 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-utilities/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.436587 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/extract-content/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.548263 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-utilities/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.624586 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qzlbb_66dab92d-4fda-4b03-82a4-9ceb5638b114/registry-server/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.732081 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-content/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.738116 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-utilities/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.738845 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-content/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.864138 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-content/0.log" Nov 24 19:02:55 crc kubenswrapper[4768]: I1124 19:02:55.880986 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/extract-utilities/0.log" Nov 24 19:02:56 crc kubenswrapper[4768]: I1124 19:02:56.218292 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-87bxg_8db46565-c403-4103-8399-23942d4198b9/registry-server/0.log" Nov 24 19:02:58 crc kubenswrapper[4768]: I1124 19:02:58.898978 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:02:58 crc kubenswrapper[4768]: E1124 19:02:58.899838 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:03:10 crc kubenswrapper[4768]: I1124 19:03:10.900640 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:03:10 crc kubenswrapper[4768]: E1124 19:03:10.903451 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:03:15 crc kubenswrapper[4768]: E1124 19:03:15.046671 4768 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.58:43178->38.102.83.58:42411: read tcp 38.102.83.58:43178->38.102.83.58:42411: read: connection reset by peer Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.139240 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5zvl6"] Nov 24 19:03:19 crc kubenswrapper[4768]: E1124 19:03:19.140353 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2efc3794-f03f-469c-9882-bad25688c861" containerName="keystone-cron" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.140372 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2efc3794-f03f-469c-9882-bad25688c861" containerName="keystone-cron" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.140675 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2efc3794-f03f-469c-9882-bad25688c861" containerName="keystone-cron" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.142396 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.151078 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zvl6"] Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.271030 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-utilities\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.271148 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-catalog-content\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.271220 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4df\" (UniqueName: \"kubernetes.io/projected/4f145d8d-21b0-4892-a971-89b93f46b850-kube-api-access-xw4df\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.372954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-utilities\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.373091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-catalog-content\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.373172 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4df\" (UniqueName: \"kubernetes.io/projected/4f145d8d-21b0-4892-a971-89b93f46b850-kube-api-access-xw4df\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.373656 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-utilities\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.373778 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-catalog-content\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.400565 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xw4df\" (UniqueName: \"kubernetes.io/projected/4f145d8d-21b0-4892-a971-89b93f46b850-kube-api-access-xw4df\") pod \"redhat-marketplace-5zvl6\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:19 crc kubenswrapper[4768]: I1124 19:03:19.468782 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:20 crc kubenswrapper[4768]: I1124 19:03:20.081032 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zvl6"] Nov 24 19:03:20 crc kubenswrapper[4768]: I1124 19:03:20.873256 4768 generic.go:334] "Generic (PLEG): container finished" podID="4f145d8d-21b0-4892-a971-89b93f46b850" containerID="d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7" exitCode=0 Nov 24 19:03:20 crc kubenswrapper[4768]: I1124 19:03:20.873354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zvl6" event={"ID":"4f145d8d-21b0-4892-a971-89b93f46b850","Type":"ContainerDied","Data":"d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7"} Nov 24 19:03:20 crc kubenswrapper[4768]: I1124 19:03:20.873644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zvl6" event={"ID":"4f145d8d-21b0-4892-a971-89b93f46b850","Type":"ContainerStarted","Data":"4fa035db3f42e4744e0f144956373cde3b0e356a2ccd57e145ca07c06abd3512"} Nov 24 19:03:22 crc kubenswrapper[4768]: I1124 19:03:22.901850 4768 generic.go:334] "Generic (PLEG): container finished" podID="4f145d8d-21b0-4892-a971-89b93f46b850" containerID="b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f" exitCode=0 Nov 24 19:03:22 crc kubenswrapper[4768]: I1124 19:03:22.901928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zvl6" event={"ID":"4f145d8d-21b0-4892-a971-89b93f46b850","Type":"ContainerDied","Data":"b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f"} Nov 24 19:03:23 crc kubenswrapper[4768]: I1124 19:03:23.900794 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:03:23 crc kubenswrapper[4768]: E1124 19:03:23.901697 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:03:23 crc kubenswrapper[4768]: I1124 19:03:23.917364 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zvl6" event={"ID":"4f145d8d-21b0-4892-a971-89b93f46b850","Type":"ContainerStarted","Data":"1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051"} Nov 24 19:03:23 crc kubenswrapper[4768]: I1124 19:03:23.947261 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5zvl6" podStartSLOduration=2.459249227 podStartE2EDuration="4.947233114s" podCreationTimestamp="2025-11-24 19:03:19 +0000 UTC" firstStartedPulling="2025-11-24 19:03:20.875455673 +0000 UTC m=+4439.736037450" lastFinishedPulling="2025-11-24 
19:03:23.36343956 +0000 UTC m=+4442.224021337" observedRunningTime="2025-11-24 19:03:23.937967674 +0000 UTC m=+4442.798549471" watchObservedRunningTime="2025-11-24 19:03:23.947233114 +0000 UTC m=+4442.807814901" Nov 24 19:03:29 crc kubenswrapper[4768]: I1124 19:03:29.469050 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:29 crc kubenswrapper[4768]: I1124 19:03:29.470968 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:29 crc kubenswrapper[4768]: I1124 19:03:29.523296 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:30 crc kubenswrapper[4768]: I1124 19:03:30.034150 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:30 crc kubenswrapper[4768]: I1124 19:03:30.105299 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zvl6"] Nov 24 19:03:31 crc kubenswrapper[4768]: I1124 19:03:31.989032 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5zvl6" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="registry-server" containerID="cri-o://1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051" gracePeriod=2 Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.446986 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.515857 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-catalog-content\") pod \"4f145d8d-21b0-4892-a971-89b93f46b850\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.516009 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-utilities\") pod \"4f145d8d-21b0-4892-a971-89b93f46b850\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.516201 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw4df\" (UniqueName: \"kubernetes.io/projected/4f145d8d-21b0-4892-a971-89b93f46b850-kube-api-access-xw4df\") pod \"4f145d8d-21b0-4892-a971-89b93f46b850\" (UID: \"4f145d8d-21b0-4892-a971-89b93f46b850\") " Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.516783 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-utilities" (OuterVolumeSpecName: "utilities") pod "4f145d8d-21b0-4892-a971-89b93f46b850" (UID: "4f145d8d-21b0-4892-a971-89b93f46b850"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.517255 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.524693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f145d8d-21b0-4892-a971-89b93f46b850-kube-api-access-xw4df" (OuterVolumeSpecName: "kube-api-access-xw4df") pod "4f145d8d-21b0-4892-a971-89b93f46b850" (UID: "4f145d8d-21b0-4892-a971-89b93f46b850"). InnerVolumeSpecName "kube-api-access-xw4df". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.540049 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f145d8d-21b0-4892-a971-89b93f46b850" (UID: "4f145d8d-21b0-4892-a971-89b93f46b850"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.618108 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw4df\" (UniqueName: \"kubernetes.io/projected/4f145d8d-21b0-4892-a971-89b93f46b850-kube-api-access-xw4df\") on node \"crc\" DevicePath \"\"" Nov 24 19:03:32 crc kubenswrapper[4768]: I1124 19:03:32.618142 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f145d8d-21b0-4892-a971-89b93f46b850-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.001987 4768 generic.go:334] "Generic (PLEG): container finished" podID="4f145d8d-21b0-4892-a971-89b93f46b850" containerID="1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051" exitCode=0 Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.002047 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zvl6" event={"ID":"4f145d8d-21b0-4892-a971-89b93f46b850","Type":"ContainerDied","Data":"1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051"} Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.002085 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zvl6" event={"ID":"4f145d8d-21b0-4892-a971-89b93f46b850","Type":"ContainerDied","Data":"4fa035db3f42e4744e0f144956373cde3b0e356a2ccd57e145ca07c06abd3512"} Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.002107 4768 scope.go:117] "RemoveContainer" containerID="1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.002276 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zvl6" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.041932 4768 scope.go:117] "RemoveContainer" containerID="b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.043288 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zvl6"] Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.051662 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zvl6"] Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.623931 4768 scope.go:117] "RemoveContainer" containerID="d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.680465 4768 scope.go:117] "RemoveContainer" containerID="1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051" Nov 24 19:03:33 crc kubenswrapper[4768]: E1124 19:03:33.681114 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051\": container with ID starting with 1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051 not found: ID does not exist" containerID="1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.681153 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051"} err="failed to get container status \"1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051\": rpc error: code = NotFound desc = could not find container \"1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051\": container with ID starting with 1b293c9382174fb3cf4feb0ffcb77211c8c667224ef7f24a1e26905d19883051 not found: ID does not exist" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.681179 4768 scope.go:117] "RemoveContainer" containerID="b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f" Nov 24 19:03:33 crc kubenswrapper[4768]: E1124 19:03:33.681443 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f\": container with ID starting with b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f not found: ID does not exist" containerID="b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.681473 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f"} err="failed to get container status \"b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f\": rpc error: code = NotFound desc = could not find container \"b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f\": container with ID starting with b4f85b63deaa32081b20a4d58e307aeac7fdb606f91ae3f86669039ea2b9413f not found: ID does not exist" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.681508 4768 scope.go:117] "RemoveContainer" containerID="d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7" Nov 24 19:03:33 crc kubenswrapper[4768]: E1124 19:03:33.682015 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7\": container with ID starting with d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7 not found: ID does not exist" containerID="d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.682047 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7"} err="failed to get container status \"d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7\": rpc error: code = NotFound desc = could not find container \"d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7\": container with ID starting with d7b78cdd15baa7b3fda5a969326209ffc3f7c43204fe219380082c1510f9afa7 not found: ID does not exist" Nov 24 19:03:33 crc kubenswrapper[4768]: I1124 19:03:33.909472 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" path="/var/lib/kubelet/pods/4f145d8d-21b0-4892-a971-89b93f46b850/volumes" Nov 24 19:03:36 crc kubenswrapper[4768]: I1124 19:03:36.898996 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:03:36 crc kubenswrapper[4768]: E1124 19:03:36.900015 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:03:51 crc kubenswrapper[4768]: I1124 19:03:51.907265 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:03:51 crc kubenswrapper[4768]: E1124 19:03:51.908542 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:04:05 crc kubenswrapper[4768]: I1124 19:04:05.907283 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:04:05 crc kubenswrapper[4768]: E1124 19:04:05.910210 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:04:18 crc kubenswrapper[4768]: I1124 19:04:18.899212 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:04:18 crc kubenswrapper[4768]: E1124 19:04:18.900106 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:04:29 crc kubenswrapper[4768]: I1124 19:04:29.899147 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:04:29 crc kubenswrapper[4768]: E1124 19:04:29.900455 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:04:40 crc kubenswrapper[4768]: I1124 19:04:40.898861 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:04:40 crc kubenswrapper[4768]: E1124 19:04:40.899858 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:04:44 crc kubenswrapper[4768]: I1124 19:04:44.827211 4768 generic.go:334] "Generic (PLEG): container finished" podID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerID="0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91" exitCode=0 Nov 24 19:04:44 crc kubenswrapper[4768]: I1124 19:04:44.827317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7chgt/must-gather-t5vgx" event={"ID":"d0ccf541-cacf-4978-9dee-a43cb81c501f","Type":"ContainerDied","Data":"0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91"} Nov 24 19:04:44 crc kubenswrapper[4768]: I1124 19:04:44.828536 4768 scope.go:117] "RemoveContainer" containerID="0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91" Nov 24 19:04:45 crc kubenswrapper[4768]: I1124 19:04:45.769639 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7chgt_must-gather-t5vgx_d0ccf541-cacf-4978-9dee-a43cb81c501f/gather/0.log" Nov 24 19:04:54 crc kubenswrapper[4768]: I1124 19:04:54.899248 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:04:54 crc kubenswrapper[4768]: E1124 19:04:54.900232 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:04:55 crc kubenswrapper[4768]: I1124 19:04:55.936775 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7chgt/must-gather-t5vgx"] Nov 24 19:04:55 crc kubenswrapper[4768]: I1124 
19:04:55.937127 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7chgt/must-gather-t5vgx" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="copy" containerID="cri-o://f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c" gracePeriod=2 Nov 24 19:04:55 crc kubenswrapper[4768]: I1124 19:04:55.944648 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7chgt/must-gather-t5vgx"] Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.387944 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7chgt_must-gather-t5vgx_d0ccf541-cacf-4978-9dee-a43cb81c501f/copy/0.log" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.389172 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.495095 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d0ccf541-cacf-4978-9dee-a43cb81c501f-must-gather-output\") pod \"d0ccf541-cacf-4978-9dee-a43cb81c501f\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.495436 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wqlc\" (UniqueName: \"kubernetes.io/projected/d0ccf541-cacf-4978-9dee-a43cb81c501f-kube-api-access-6wqlc\") pod \"d0ccf541-cacf-4978-9dee-a43cb81c501f\" (UID: \"d0ccf541-cacf-4978-9dee-a43cb81c501f\") " Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.502830 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ccf541-cacf-4978-9dee-a43cb81c501f-kube-api-access-6wqlc" (OuterVolumeSpecName: "kube-api-access-6wqlc") pod "d0ccf541-cacf-4978-9dee-a43cb81c501f" (UID: "d0ccf541-cacf-4978-9dee-a43cb81c501f"). InnerVolumeSpecName "kube-api-access-6wqlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.598368 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wqlc\" (UniqueName: \"kubernetes.io/projected/d0ccf541-cacf-4978-9dee-a43cb81c501f-kube-api-access-6wqlc\") on node \"crc\" DevicePath \"\"" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.624324 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0ccf541-cacf-4978-9dee-a43cb81c501f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d0ccf541-cacf-4978-9dee-a43cb81c501f" (UID: "d0ccf541-cacf-4978-9dee-a43cb81c501f"). InnerVolumeSpecName "must-gather-output". 
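[Editor's note] The copy container above is killed with gracePeriod=2, and the entries that follow show it finishing with exitCode=143. That is the usual 128+signal convention: SIGTERM is signal 15, so 128+15=143 (contrast the registry-server and gather containers, which exited cleanly with exitCode=0). A small Go check of that arithmetic; the sleep child is a stand-in for the container process:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the "copy" container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Process.Signal(syscall.SIGTERM) // the grace-period SIGTERM
	_ = cmd.Wait()                          // returns an error for signal deaths; ignored here

	ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
	if ws.Signaled() {
		// Container runtimes report signal deaths as 128+signal: SIGTERM(15) -> 143.
		fmt.Printf("killed by %v, reported exit code %d\n", ws.Signal(), 128+int(ws.Signal()))
	}
}
```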
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.702738 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d0ccf541-cacf-4978-9dee-a43cb81c501f-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.939177 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7chgt_must-gather-t5vgx_d0ccf541-cacf-4978-9dee-a43cb81c501f/copy/0.log" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.939786 4768 generic.go:334] "Generic (PLEG): container finished" podID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerID="f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c" exitCode=143 Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.939906 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7chgt/must-gather-t5vgx" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.940011 4768 scope.go:117] "RemoveContainer" containerID="f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c" Nov 24 19:04:56 crc kubenswrapper[4768]: I1124 19:04:56.971277 4768 scope.go:117] "RemoveContainer" containerID="0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91" Nov 24 19:04:57 crc kubenswrapper[4768]: I1124 19:04:57.026372 4768 scope.go:117] "RemoveContainer" containerID="f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c" Nov 24 19:04:57 crc kubenswrapper[4768]: E1124 19:04:57.027717 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c\": container with ID starting with f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c not found: ID does not exist" containerID="f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c" Nov 24 19:04:57 crc kubenswrapper[4768]: I1124 19:04:57.028077 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c"} err="failed to get container status \"f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c\": rpc error: code = NotFound desc = could not find container \"f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c\": container with ID starting with f1038efc18d46a85e5e271ae8eb29d475d0a848342fe92ca098e05aee9d7d04c not found: ID does not exist" Nov 24 19:04:57 crc kubenswrapper[4768]: I1124 19:04:57.028113 4768 scope.go:117] "RemoveContainer" containerID="0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91" Nov 24 19:04:57 crc kubenswrapper[4768]: E1124 19:04:57.030006 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91\": container with ID starting with 0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91 not found: ID does not exist" containerID="0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91" Nov 24 19:04:57 crc kubenswrapper[4768]: I1124 19:04:57.030030 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91"} err="failed to get container status 
\"0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91\": rpc error: code = NotFound desc = could not find container \"0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91\": container with ID starting with 0137933b1042b3581a285680d0947677f48a85d510b732d3d2b3083b47892e91 not found: ID does not exist" Nov 24 19:04:57 crc kubenswrapper[4768]: I1124 19:04:57.923214 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" path="/var/lib/kubelet/pods/d0ccf541-cacf-4978-9dee-a43cb81c501f/volumes" Nov 24 19:05:07 crc kubenswrapper[4768]: I1124 19:05:07.899249 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:05:07 crc kubenswrapper[4768]: E1124 19:05:07.904683 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:05:18 crc kubenswrapper[4768]: I1124 19:05:18.898639 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:05:18 crc kubenswrapper[4768]: E1124 19:05:18.899390 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:05:29 crc kubenswrapper[4768]: I1124 19:05:29.900468 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:05:29 crc kubenswrapper[4768]: E1124 19:05:29.901776 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:05:42 crc kubenswrapper[4768]: I1124 19:05:42.900651 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:05:42 crc kubenswrapper[4768]: E1124 19:05:42.901654 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:05:56 crc kubenswrapper[4768]: I1124 19:05:56.899529 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:05:56 crc kubenswrapper[4768]: E1124 19:05:56.901012 4768 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:06:01 crc kubenswrapper[4768]: I1124 19:06:01.211266 4768 scope.go:117] "RemoveContainer" containerID="658ca2c941d98bc2e8469f1e154c4d13c447062b1f3fb3a390141707551c875a" Nov 24 19:06:08 crc kubenswrapper[4768]: I1124 19:06:08.898225 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:06:08 crc kubenswrapper[4768]: E1124 19:06:08.898888 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.088122 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nq6b"] Nov 24 19:06:12 crc kubenswrapper[4768]: E1124 19:06:12.089147 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="copy" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089165 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="copy" Nov 24 19:06:12 crc kubenswrapper[4768]: E1124 19:06:12.089185 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="extract-content" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089193 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="extract-content" Nov 24 19:06:12 crc kubenswrapper[4768]: E1124 19:06:12.089221 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="gather" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089244 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="gather" Nov 24 19:06:12 crc kubenswrapper[4768]: E1124 19:06:12.089288 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="extract-utilities" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089298 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="extract-utilities" Nov 24 19:06:12 crc kubenswrapper[4768]: E1124 19:06:12.089313 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="registry-server" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089321 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="registry-server" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089584 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="copy" Nov 24 
19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089605 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f145d8d-21b0-4892-a971-89b93f46b850" containerName="registry-server" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.089623 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0ccf541-cacf-4978-9dee-a43cb81c501f" containerName="gather" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.092833 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.117889 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-catalog-content\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.117960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-utilities\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.118055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzrb\" (UniqueName: \"kubernetes.io/projected/5955944f-9c11-422c-9738-bcdbb710314c-kube-api-access-5wzrb\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.120254 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nq6b"] Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.220038 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-utilities\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.220163 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzrb\" (UniqueName: \"kubernetes.io/projected/5955944f-9c11-422c-9738-bcdbb710314c-kube-api-access-5wzrb\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.220284 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-catalog-content\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.220921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-catalog-content\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " 
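[Editor's note] When certified-operators-9nq6b is admitted (SyncLoop ADD), the CPU and memory managers first sweep out per-container state left by the two pods deleted earlier (copy/gather from the must-gather pod; extract-utilities/extract-content/registry-server from the marketplace pod). The "RemoveStaleState: removing container" lines are logged at error severity but are housekeeping, not failures. A rough sketch of such a stale-state sweep over an in-memory assignment map; the types and values are illustrative, not the kubelet's:

```go
package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// Assignments left behind by pods that no longer exist on the node.
	assignments := map[key]string{
		{"d0ccf541-cacf-4978-9dee-a43cb81c501f", "copy"}:            "cpuset 0-1", // illustrative value
		{"4f145d8d-21b0-4892-a971-89b93f46b850", "registry-server"}: "cpuset 2-3", // illustrative value
	}
	active := map[string]bool{"5955944f-9c11-422c-9738-bcdbb710314c": true} // the newly admitted pod

	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n", k.podUID, k.container)
			delete(assignments, k) // deleting during range is legal in Go
		}
	}
}
```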
pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.221245 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-utilities\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.244787 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzrb\" (UniqueName: \"kubernetes.io/projected/5955944f-9c11-422c-9738-bcdbb710314c-kube-api-access-5wzrb\") pod \"certified-operators-9nq6b\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.417941 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:12 crc kubenswrapper[4768]: I1124 19:06:12.900262 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nq6b"] Nov 24 19:06:13 crc kubenswrapper[4768]: I1124 19:06:13.789234 4768 generic.go:334] "Generic (PLEG): container finished" podID="5955944f-9c11-422c-9738-bcdbb710314c" containerID="6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f" exitCode=0 Nov 24 19:06:13 crc kubenswrapper[4768]: I1124 19:06:13.789322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerDied","Data":"6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f"} Nov 24 19:06:13 crc kubenswrapper[4768]: I1124 19:06:13.789786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerStarted","Data":"c49949c8e5dc7a905b738746861acb6ebc47c78f8d698c3163c8f560421cba57"} Nov 24 19:06:13 crc kubenswrapper[4768]: I1124 19:06:13.792740 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 19:06:15 crc kubenswrapper[4768]: I1124 19:06:15.813728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerStarted","Data":"4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930"} Nov 24 19:06:16 crc kubenswrapper[4768]: I1124 19:06:16.828035 4768 generic.go:334] "Generic (PLEG): container finished" podID="5955944f-9c11-422c-9738-bcdbb710314c" containerID="4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930" exitCode=0 Nov 24 19:06:16 crc kubenswrapper[4768]: I1124 19:06:16.828150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerDied","Data":"4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930"} Nov 24 19:06:17 crc kubenswrapper[4768]: I1124 19:06:17.844387 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerStarted","Data":"f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963"} Nov 24 19:06:17 crc kubenswrapper[4768]: I1124 19:06:17.876525 
4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nq6b" podStartSLOduration=2.336178409 podStartE2EDuration="5.876506554s" podCreationTimestamp="2025-11-24 19:06:12 +0000 UTC" firstStartedPulling="2025-11-24 19:06:13.792411466 +0000 UTC m=+4612.652993243" lastFinishedPulling="2025-11-24 19:06:17.332739611 +0000 UTC m=+4616.193321388" observedRunningTime="2025-11-24 19:06:17.86821393 +0000 UTC m=+4616.728795707" watchObservedRunningTime="2025-11-24 19:06:17.876506554 +0000 UTC m=+4616.737088321" Nov 24 19:06:20 crc kubenswrapper[4768]: I1124 19:06:20.899067 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:06:20 crc kubenswrapper[4768]: E1124 19:06:20.899884 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:06:22 crc kubenswrapper[4768]: I1124 19:06:22.419007 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:22 crc kubenswrapper[4768]: I1124 19:06:22.419405 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:22 crc kubenswrapper[4768]: I1124 19:06:22.851305 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:22 crc kubenswrapper[4768]: I1124 19:06:22.966517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:23 crc kubenswrapper[4768]: I1124 19:06:23.106280 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nq6b"] Nov 24 19:06:24 crc kubenswrapper[4768]: I1124 19:06:24.923219 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nq6b" podUID="5955944f-9c11-422c-9738-bcdbb710314c" containerName="registry-server" containerID="cri-o://f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963" gracePeriod=2 Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.504204 4768 util.go:48] "No ready sandbox for pod can be found. 
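[Editor's note] The pod_startup_latency_tracker entry above is internally consistent, and the relationship is worth spelling out: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (19:06:17.876506554 - 19:06:12 = 5.876506554s), and podStartSLOduration is, at least in this entry, the E2E duration minus the image-pull window (lastFinishedPulling - firstStartedPulling = 3.540328145s), giving exactly 2.336178409s. Re-deriving it from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// Re-derives the tracker's numbers from the timestamps in the entry above:
// SLO duration = (observed running - creation) - time spent pulling images.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-24 19:06:12 +0000 UTC")
	running := parse("2025-11-24 19:06:17.876506554 +0000 UTC")
	pullStart := parse("2025-11-24 19:06:13.792411466 +0000 UTC")
	pullEnd := parse("2025-11-24 19:06:17.332739611 +0000 UTC")

	e2e := running.Sub(created)         // 5.876506554s
	slo := e2e - pullEnd.Sub(pullStart) // 5.876506554s - 3.540328145s
	fmt.Println(e2e, slo)               // prints: 5.876506554s 2.336178409s
}
```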
Need to start a new one" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.530908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-utilities\") pod \"5955944f-9c11-422c-9738-bcdbb710314c\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.531199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wzrb\" (UniqueName: \"kubernetes.io/projected/5955944f-9c11-422c-9738-bcdbb710314c-kube-api-access-5wzrb\") pod \"5955944f-9c11-422c-9738-bcdbb710314c\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.531659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-catalog-content\") pod \"5955944f-9c11-422c-9738-bcdbb710314c\" (UID: \"5955944f-9c11-422c-9738-bcdbb710314c\") " Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.532265 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-utilities" (OuterVolumeSpecName: "utilities") pod "5955944f-9c11-422c-9738-bcdbb710314c" (UID: "5955944f-9c11-422c-9738-bcdbb710314c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.532834 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.538578 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5955944f-9c11-422c-9738-bcdbb710314c-kube-api-access-5wzrb" (OuterVolumeSpecName: "kube-api-access-5wzrb") pod "5955944f-9c11-422c-9738-bcdbb710314c" (UID: "5955944f-9c11-422c-9738-bcdbb710314c"). InnerVolumeSpecName "kube-api-access-5wzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.589264 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5955944f-9c11-422c-9738-bcdbb710314c" (UID: "5955944f-9c11-422c-9738-bcdbb710314c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.635202 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5955944f-9c11-422c-9738-bcdbb710314c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.635260 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wzrb\" (UniqueName: \"kubernetes.io/projected/5955944f-9c11-422c-9738-bcdbb710314c-kube-api-access-5wzrb\") on node \"crc\" DevicePath \"\"" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.938387 4768 generic.go:334] "Generic (PLEG): container finished" podID="5955944f-9c11-422c-9738-bcdbb710314c" containerID="f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963" exitCode=0 Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.938455 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nq6b" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.938526 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerDied","Data":"f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963"} Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.938602 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nq6b" event={"ID":"5955944f-9c11-422c-9738-bcdbb710314c","Type":"ContainerDied","Data":"c49949c8e5dc7a905b738746861acb6ebc47c78f8d698c3163c8f560421cba57"} Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.938632 4768 scope.go:117] "RemoveContainer" containerID="f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.964889 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nq6b"] Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.968551 4768 scope.go:117] "RemoveContainer" containerID="4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930" Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.974931 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nq6b"] Nov 24 19:06:25 crc kubenswrapper[4768]: I1124 19:06:25.997685 4768 scope.go:117] "RemoveContainer" containerID="6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f" Nov 24 19:06:26 crc kubenswrapper[4768]: I1124 19:06:26.050243 4768 scope.go:117] "RemoveContainer" containerID="f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963" Nov 24 19:06:26 crc kubenswrapper[4768]: E1124 19:06:26.050782 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963\": container with ID starting with f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963 not found: ID does not exist" containerID="f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963" Nov 24 19:06:26 crc kubenswrapper[4768]: I1124 19:06:26.050854 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963"} err="failed to get container status 
\"f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963\": rpc error: code = NotFound desc = could not find container \"f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963\": container with ID starting with f74f8abfa24b14e257f97d92dc8023559f70d41ef8393a2ec4326e6f2c53d963 not found: ID does not exist" Nov 24 19:06:26 crc kubenswrapper[4768]: I1124 19:06:26.050899 4768 scope.go:117] "RemoveContainer" containerID="4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930" Nov 24 19:06:26 crc kubenswrapper[4768]: E1124 19:06:26.051946 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930\": container with ID starting with 4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930 not found: ID does not exist" containerID="4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930" Nov 24 19:06:26 crc kubenswrapper[4768]: I1124 19:06:26.051981 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930"} err="failed to get container status \"4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930\": rpc error: code = NotFound desc = could not find container \"4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930\": container with ID starting with 4dc2820665eeecd388f536f1aee5bae15794c41e8c55591d19cb9266d06b1930 not found: ID does not exist" Nov 24 19:06:26 crc kubenswrapper[4768]: I1124 19:06:26.052001 4768 scope.go:117] "RemoveContainer" containerID="6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f" Nov 24 19:06:26 crc kubenswrapper[4768]: E1124 19:06:26.052454 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f\": container with ID starting with 6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f not found: ID does not exist" containerID="6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f" Nov 24 19:06:26 crc kubenswrapper[4768]: I1124 19:06:26.052553 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f"} err="failed to get container status \"6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f\": rpc error: code = NotFound desc = could not find container \"6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f\": container with ID starting with 6a9b32aea4bc03de059df14ae317d0a2d318a0712a8ce0ace34f79c09572015f not found: ID does not exist" Nov 24 19:06:27 crc kubenswrapper[4768]: I1124 19:06:27.919928 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5955944f-9c11-422c-9738-bcdbb710314c" path="/var/lib/kubelet/pods/5955944f-9c11-422c-9738-bcdbb710314c/volumes" Nov 24 19:06:32 crc kubenswrapper[4768]: I1124 19:06:32.899072 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:06:32 crc kubenswrapper[4768]: E1124 19:06:32.899980 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:06:47 crc kubenswrapper[4768]: I1124 19:06:47.899398 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:06:47 crc kubenswrapper[4768]: E1124 19:06:47.901785 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:07:00 crc kubenswrapper[4768]: I1124 19:07:00.899435 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:07:00 crc kubenswrapper[4768]: E1124 19:07:00.900354 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:07:13 crc kubenswrapper[4768]: I1124 19:07:13.899423 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:07:13 crc kubenswrapper[4768]: E1124 19:07:13.900671 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:07:27 crc kubenswrapper[4768]: I1124 19:07:27.899653 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:07:27 crc kubenswrapper[4768]: E1124 19:07:27.901155 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda" Nov 24 19:07:41 crc kubenswrapper[4768]: I1124 19:07:41.901146 4768 scope.go:117] "RemoveContainer" containerID="7f790c0d988038c964f457311cc14f54056db356d82d5d7b2a546931b73e9d47" Nov 24 19:07:41 crc kubenswrapper[4768]: E1124 19:07:41.902524 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ljwzj_openshift-machine-config-operator(423ac327-22e2-4cc9-ba57-a1b2fc6f4bda)\"" pod="openshift-machine-config-operator/machine-config-daemon-ljwzj" 
podUID="423ac327-22e2-4cc9-ba57-a1b2fc6f4bda"